[2602.16610] Who can we trust? LLM-as-a-jury for Comparative Assessment


arXiv - Machine Learning · 3 min read

Summary

The paper examines the reliability of large language models (LLMs) as evaluators in natural language generation tasks and proposes BT-sigma, a judge-aware extension of the Bradley-Terry model, to improve judgment accuracy and reliability.

Why It Matters

As LLMs are increasingly used for automated assessments, understanding their reliability is crucial for ensuring fair and accurate evaluations. This research addresses inconsistencies in LLM judgments and proposes a method to enhance their effectiveness, which is vital for advancing AI applications in natural language processing.

Key Takeaways

  • LLMs show substantial variability in performance across tasks.
  • Existing aggregation methods may not accurately reflect judge reliability.
  • The BT-sigma model introduces a per-judge discriminator parameter to jointly infer item rankings and judge reliability.
  • Empirical results indicate BT-sigma outperforms traditional averaging methods.
  • The model serves as an unsupervised calibration mechanism for LLM evaluations.

Computer Science > Computation and Language
arXiv:2602.16610 (cs) [Submitted on 18 Feb 2026]
Title: Who can we trust? LLM-as-a-jury for Comparative Assessment
Authors: Mengjie Qian, Guangzhi Sun, Mark J.F. Gales, Kate M. Knill
Abstract: Large language models (LLMs) are increasingly applied as automatic evaluators for natural language generation assessment, often using pairwise comparative judgements. Existing approaches typically rely on single judges or aggregate multiple judges assuming equal reliability. In practice, LLM judges vary substantially in performance across tasks and aspects, and their judgment probabilities may be biased and inconsistent. Furthermore, human-labelled supervision for judge calibration may be unavailable. We first empirically demonstrate that inconsistencies in LLM comparison probabilities exist and show that they limit the effectiveness of direct probability-based ranking. To address this, we study the LLM-as-a-jury setting and propose BT-sigma, a judge-aware extension of the Bradley-Terry model that introduces a discriminator parameter for each judge to jointly infer item rankings and judge reliability from pairwise comparisons alone. Experiments on benchmark NLG evaluation datasets show that BT-sigma consistently outperforms averaging-based aggregation methods, and that the learned discriminat...
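The abstract describes BT-sigma only at a high level. As a rough illustration (not the paper's actual implementation), assuming the judge-aware likelihood takes the natural form P(i beats k | judge j) = sigmoid(sigma_j · (s_i − s_k)), where s are latent item scores and sigma_j is judge j's discriminator, a maximum-likelihood fit by gradient ascent might be sketched as follows. All names and hyperparameters here are hypothetical; the paper's parameterization and fitting procedure may differ.

```python
import numpy as np

def fit_bt_sigma(comparisons, n_items, n_judges, lr=0.2, n_iters=2000, seed=0):
    """Fit a judge-aware Bradley-Terry model by full-batch gradient ascent.

    comparisons: iterable of (winner, loser, judge) index triples.
    Assumed model: P(winner beats loser | judge j)
        = sigmoid(sigma[j] * (s[winner] - s[loser])),
    where s are latent item scores and sigma[j] is judge j's discriminator
    (near zero => that judge's votes carry little ranking information).
    """
    rng = np.random.default_rng(seed)
    W, L, J = (np.array(col) for col in zip(*comparisons))
    s = rng.normal(0.0, 0.01, n_items)   # latent item scores
    sigma = np.ones(n_judges)            # per-judge discriminators
    # per-parameter comparison counts, used to average the gradient steps
    cnt_s = np.maximum(np.bincount(np.concatenate([W, L]), minlength=n_items), 1)
    cnt_j = np.maximum(np.bincount(J, minlength=n_judges), 1)
    for _ in range(n_iters):
        diff = s[W] - s[L]
        p = 1.0 / (1.0 + np.exp(-sigma[J] * diff))  # P(observed outcome)
        g = 1.0 - p                                 # d log p / d(sigma * diff)
        grad_s = np.zeros(n_items)
        np.add.at(grad_s, W, sigma[J] * g)          # winners pushed up
        np.add.at(grad_s, L, -sigma[J] * g)         # losers pushed down
        grad_sigma = np.zeros(n_judges)
        np.add.at(grad_sigma, J, diff * g)          # consistent judges sharpen
        s += lr * grad_s / cnt_s
        sigma += lr * grad_sigma / cnt_j
        s -= s.mean()                               # fix translation invariance
    return s, sigma
```

On synthetic data with one reliable and one near-random judge, the fitted sigma comes out larger for the reliable judge, which matches the unsupervised-calibration behaviour the takeaways describe: judge reliability is inferred from the pairwise comparisons alone, without human labels.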

