LLM-as-a-Grader: Practical Insights from Large Language Model for Short-Answer and Report Evaluation
A recent study published on arXiv investigates the use of Large Language Models (LLMs), specifically GPT-4o, for grading short-answer quizzes and project reports in an undergraduate Computational Linguistics course. The research involved approximately 50 students and 14 project teams, and compared LLM-generated scores with evaluations from teaching assistants. The LLM's scores correlated strongly with those of human graders (up to 0.98) and matched them exactly in 55% of quiz cases, highlighting both the potential and the limitations of LLM-based grading systems.
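As a rough illustration of the kind of agreement metrics reported above, the sketch below computes a Pearson correlation and an exact-agreement rate between two graders' score lists. The score values are hypothetical examples, not the study's data.

```python
# Illustrative sketch of grader-agreement metrics like those in the study:
# Pearson correlation and exact-score agreement between two score lists.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

def exact_agreement(xs, ys):
    """Fraction of items on which the two graders gave identical scores."""
    return sum(x == y for x, y in zip(xs, ys)) / len(xs)

# Hypothetical quiz scores from a teaching assistant and an LLM grader
ta_scores = [5, 4, 3, 5, 2, 4, 5, 3]
llm_scores = [5, 4, 4, 5, 2, 3, 5, 3]

print(round(pearson(ta_scores, llm_scores), 3))
print(exact_agreement(ta_scores, llm_scores))
```

A high correlation alone can mask systematic offsets (e.g. an LLM that consistently grades one point higher), which is why exact-agreement rates are a useful complementary statistic.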
— via World Pulse Now AI Editorial System

