LLMGrader is an AI-based assistant designed to support instructors in grading written response questions. It enables scalable assessment of higher-order thinking while preserving instructor oversight, academic judgment, and data privacy.
Written response questions are widely recognized as one of the most effective methods for evaluating student understanding, reasoning, and synthesis. However, their use is often constrained by practical limitations, chiefly the time and effort required to grade them consistently at scale and to return feedback promptly.
LLMGrader is designed to function as a teaching assistant, not as a replacement for instructor judgment. It assists with initial grading and feedback generation while keeping the instructor fully in control of final outcomes.
A typical workflow has four steps:

1. The instructor provides contextual information, including the question, a reference solution, grading rubric, and point allocation.
2. Student responses are evaluated by a large language model guided strictly by the provided rubric and instructional context.
3. Instructors review the generated grades and feedback, making adjustments as needed before finalizing results.
4. Final grades and feedback can be exported for upload into learning management systems.
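This workflow might be sketched as follows. The function names, the prompt layout, and the `llm_call` callable are illustrative assumptions, not LLMGrader's actual API; any real LLM client with a text-in, structured-out interface could play the `llm_call` role.

```python
import csv
import io


def build_grading_prompt(question, reference_solution, rubric, max_points, response):
    """Assemble the instructor-provided context and one student response
    into a single grading prompt (layout is an assumption)."""
    return (
        f"You are grading a written response worth {max_points} points.\n"
        f"Question: {question}\n"
        f"Reference solution: {reference_solution}\n"
        f"Rubric: {rubric}\n"
        f"Student response: {response}\n"
        "Apply the rubric strictly. Return a score and brief feedback."
    )


def grade_response(prompt, llm_call):
    """llm_call is any callable that sends the prompt to a model and returns
    a dict like {"score": float, "feedback": str}; the instructor reviews
    and may override the result before it is finalized."""
    return llm_call(prompt)


def export_grades_csv(rows):
    """Serialize finalized grades to CSV text for upload into an LMS."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["student_id", "score", "feedback"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

For example, `export_grades_csv([{"student_id": "s1", "score": 9, "feedback": "Good"}])` yields a two-line CSV ready for a gradebook import; the instructor-review step sits between `grade_response` and the export.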
LLMGrader is designed with institutional constraints and privacy requirements in mind.
Beyond grading, LLMGrader provides aggregate analysis of student responses, helping instructors identify common misunderstandings, recurring themes, and areas of strong comprehension.
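One simple form such aggregate analysis could take is counting how often instructor-defined theme tags (e.g., named misconceptions) appear across the per-student feedback. This is a minimal sketch under that assumption, not LLMGrader's actual analysis method:

```python
from collections import Counter


def summarize_feedback(feedback_items, known_themes):
    """Count how often each known theme (e.g., a misconception tag chosen
    by the instructor) appears across per-student feedback strings,
    returning themes ordered from most to least frequent."""
    counts = Counter()
    for text in feedback_items:
        lowered = text.lower()
        for theme in known_themes:
            if theme.lower() in lowered:
                counts[theme] += 1
    return counts.most_common()
```

Running this over a class's feedback with tags like "base case" or "off-by-one" surfaces which misunderstandings recur most, which is the kind of signal an instructor can use to plan review sessions.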