[2507.03772] Skewed Score: A statistical framework to assess autograders
Summary
The paper presents a statistical framework for assessing autograders used to evaluate LLM outputs, addressing reliability and bias through Bayesian generalized linear models.
Why It Matters
As reliance on autograders for evaluating large language models grows, understanding their biases and reliability becomes crucial. This framework helps researchers improve evaluation methods, ensuring fairer assessments and more interpretable autograder outputs.
Key Takeaways
- Introduces a Bayesian framework for assessing autograders.
- Addresses reliability and bias in LLM evaluations.
- Enhances traditional metrics with uncertainty estimates.
- Facilitates performance analysis and bias detection.
- Supports simultaneous assessment of autograders and research questions.
Computer Science > Machine Learning
arXiv:2507.03772 (cs)
[Submitted on 4 Jul 2025 (v1), last revised 26 Feb 2026 (this version, v3)]
Title: Skewed Score: A statistical framework to assess autograders
Authors: Magda Dubois, Harry Coppock, Mario Giulianelli, Timo Flesch, Lennart Luettgau, Cozmin Ududec
Abstract: The evaluation of large language model (LLM) outputs is increasingly performed by other LLMs, a setup commonly known as "LLM-as-a-judge", or autograders. While autograders offer a scalable alternative to human evaluation, they have shown mixed reliability and may exhibit systematic biases, depending on response type, scoring methodology, domain specificity, or other factors. Here we propose a statistical framework based on Bayesian generalised linear models (GLMs) that enables researchers to simultaneously assess their autograders while addressing their primary research questions (e.g., LLM evaluation). Our approach models evaluation outcomes (e.g., scores or pairwise preferences) as a function of properties of the grader (e.g., human vs. autograder) and the evaluated item (e.g., response length or the LLM that generated it), allowing for explicit quantification of scoring differences and potential biases within a unified framework. In addition, our method can be used to augment traditional metrics such as inter-ra...
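The abstract describes modeling evaluation outcomes (e.g., pairwise preferences) as a function of grader properties with a Bayesian GLM, yielding uncertainty estimates rather than point scores. A minimal, self-contained sketch of that idea is below; it is not the paper's implementation. The data are synthetic, the effect sizes and variable names (`is_autograder`, `b1`) are hypothetical, and the posterior is computed by a simple grid approximation instead of the authors' inference machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only): y = 1 if the grader preferred response A.
# is_autograder flags whether the judgment came from an LLM judge (1) or a human (0).
n = 400
is_autograder = rng.integers(0, 2, size=n)
true_b0, true_b1 = 0.2, 0.8  # hypothetical human baseline and autograder shift
p = 1 / (1 + np.exp(-(true_b0 + true_b1 * is_autograder)))
y = rng.random(n) < p

# Bayesian logistic GLM via grid approximation:
# y ~ Bernoulli(sigmoid(b0 + b1 * is_autograder)), with Normal(0, 2) priors.
b0_grid = np.linspace(-3, 3, 81)
b1_grid = np.linspace(-3, 3, 81)
B0, B1 = np.meshgrid(b0_grid, b1_grid, indexing="ij")

logits = B0[..., None] + B1[..., None] * is_autograder        # shape (81, 81, n)
log_lik = np.where(y, -np.log1p(np.exp(-logits)),
                      -np.log1p(np.exp(logits))).sum(axis=-1)
log_prior = -(B0**2 + B1**2) / (2 * 2**2)
log_post = log_lik + log_prior
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Marginal posterior over b1 quantifies the autograder-vs-human scoring gap
# together with its uncertainty, instead of a single point estimate.
post_b1 = post.sum(axis=0)
mean_b1 = (b1_grid * post_b1).sum()
cdf = np.cumsum(post_b1)
lo = b1_grid[np.searchsorted(cdf, 0.025)]
hi = b1_grid[np.searchsorted(cdf, 0.975)]
print(f"b1 posterior mean {mean_b1:.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")
```

Because grader type enters as a regressor, the posterior over `b1` directly answers "do autograders score differently from humans, and how sure are we?", which is the kind of simultaneous grader-and-model assessment the framework is built for.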