[2507.03772] Skewed Score: A statistical framework to assess autograders

arXiv - Machine Learning · 4 min read

Summary

The paper presents a statistical framework, built on Bayesian generalised linear models, for assessing the autograders used to evaluate LLM outputs and for quantifying their reliability and biases.

Why It Matters

As reliance on autograders for evaluating large language models grows, understanding their reliability and biases becomes crucial. This framework helps researchers improve evaluation methods, supporting fairer assessments and more interpretable autograder outputs.

Key Takeaways

  • Introduces a Bayesian framework for assessing autograders.
  • Addresses reliability and bias in LLM evaluations.
  • Enhances traditional metrics with uncertainty estimates.
  • Facilitates performance analysis and bias detection.
  • Supports assessing autograders while simultaneously addressing the primary research question (e.g., LLM evaluation).

Computer Science > Machine Learning
arXiv:2507.03772 (cs)
[Submitted on 4 Jul 2025 (v1), last revised 26 Feb 2026 (this version, v3)]

Title: Skewed Score: A statistical framework to assess autograders
Authors: Magda Dubois, Harry Coppock, Mario Giulianelli, Timo Flesch, Lennart Luettgau, Cozmin Ududec

Abstract: The evaluation of large language model (LLM) outputs is increasingly performed by other LLMs, a setup commonly known as "LLM-as-a-judge", or autograders. While autograders offer a scalable alternative to human evaluation, they have shown mixed reliability and may exhibit systematic biases, depending on response type, scoring methodology, domain specificity, or other factors. Here we propose a statistical framework based on Bayesian generalised linear models (GLMs) that enables researchers to simultaneously assess their autograders while addressing their primary research questions (e.g., LLM evaluation). Our approach models evaluation outcomes (e.g., scores or pairwise preferences) as a function of properties of the grader (e.g., human vs. autograder) and the evaluated item (e.g., response length or the LLM that generated it), allowing for explicit quantification of scoring differences and potential biases within a unified framework. In addition, our method can be used to augment traditional metrics such as inter-ra...
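To make the modelling idea concrete, here is a minimal sketch (not the authors' code) of how such a Bayesian GLM might be specified in Python with PyMC. A binary evaluation outcome is regressed on whether the grader is an autograder, the standardized response length, and their interaction, so the interaction coefficient directly measures an autograder-specific length bias. All variable names and the simulated data are hypothetical.

    import numpy as np
    import pymc as pm
    import arviz as az

    # Hypothetical data: one row per evaluation event (simulated for illustration).
    rng = np.random.default_rng(0)
    n = 500
    grader_is_llm = rng.integers(0, 2, size=n)   # 1 = autograder, 0 = human
    resp_len_z = rng.normal(size=n)              # standardized response length
    # Simulated binary outcome ("response judged acceptable"), with a built-in
    # length bias for autograders so the model has something to recover.
    true_logit = 0.2 + 0.6 * grader_is_llm + 0.4 * grader_is_llm * resp_len_z
    passed = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

    with pm.Model() as model:
        # Weakly informative priors on the regression coefficients.
        intercept = pm.Normal("intercept", 0.0, 1.5)
        b_grader = pm.Normal("b_grader", 0.0, 1.0)  # human vs. autograder shift
        b_len = pm.Normal("b_len", 0.0, 1.0)        # overall length effect
        b_bias = pm.Normal("b_bias", 0.0, 1.0)      # autograder-specific length bias
        logit_p = (intercept
                   + b_grader * grader_is_llm
                   + b_len * resp_len_z
                   + b_bias * grader_is_llm * resp_len_z)
        pm.Bernoulli("passed", logit_p=logit_p, observed=passed)
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

    # Posterior credible intervals quantify scoring differences and bias:
    # a b_bias interval excluding zero suggests the autograder weights
    # response length differently from human graders.
    print(az.summary(idata, var_names=["b_grader", "b_len", "b_bias"]))

This is only one plausible encoding: the paper's framework also covers other outcome types, such as ordinal scores or pairwise preferences, via appropriate likelihoods and link functions.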

