[2602.16610] Who can we trust? LLM-as-a-jury for Comparative Assessment
Summary
The paper examines the reliability of large language models (LLMs) as evaluators in natural language generation tasks and proposes BT-sigma, a judge-aware extension of the Bradley-Terry model, to improve judgment accuracy and reliability.
Why It Matters
As LLMs are increasingly used for automated assessments, understanding their reliability is crucial for ensuring fair and accurate evaluations. This research addresses inconsistencies in LLM judgments and proposes a method to enhance their effectiveness, which is vital for advancing AI applications in natural language processing.
Key Takeaways
- LLMs show substantial variability in performance across tasks.
- Existing aggregation methods may not accurately reflect judge reliability.
- The BT-sigma model introduces a discriminator parameter to enhance judgment accuracy.
- Empirical results indicate BT-sigma outperforms traditional averaging methods.
- The model serves as an unsupervised calibration mechanism for LLM evaluations.
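The abstract describes BT-sigma only at a high level: a Bradley-Terry extension with one discriminator parameter per judge, fit jointly with item rankings from pairwise comparisons alone. As a rough, hypothetical sketch of that idea (the paper's exact parameterisation and fitting procedure are not given here), one can model the probability that judge k prefers item i over item j as sigmoid(sigma_k * (s_i - s_j)), where s are latent item scores and sigma_k is judge k's discriminator, and fit both by gradient ascent on the log-likelihood:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_bt_sigma(n_items, n_judges, comparisons, lr=0.05, steps=2000):
    """Jointly fit item scores and per-judge discriminators by gradient
    ascent on the pairwise log-likelihood.
    comparisons: list of (judge, winner, loser) index triples."""
    s = [0.0] * n_items        # latent item quality scores
    sigma = [1.0] * n_judges   # per-judge discriminators ("reliability")
    for _ in range(steps):
        gs = [0.0] * n_items
        gsig = [0.0] * n_judges
        for k, i, j in comparisons:
            # modelled probability that judge k prefers i over j
            p = sigmoid(sigma[k] * (s[i] - s[j]))
            gs[i] += (1.0 - p) * sigma[k]
            gs[j] -= (1.0 - p) * sigma[k]
            gsig[k] += (1.0 - p) * (s[i] - s[j])
        s = [v + lr * g for v, g in zip(s, gs)]
        sigma = [max(0.0, v + lr * g) for v, g in zip(sigma, gsig)]
        mean = sum(s) / len(s)          # remove translation freedom
        s = [v - mean for v in s]
    return s, sigma

# Toy data: true quality 0 > 1 > 2; judge 0 always agrees with the
# true order, judge 1 answers at random.
pairs = [(0, 1), (0, 2), (1, 2)] * 30
comparisons = []
for i, j in pairs:
    comparisons.append((0, i, j))              # reliable judge
    w, l = random.choice([(i, j), (j, i)])
    comparisons.append((1, w, l))              # unreliable judge

scores, sigmas = fit_bt_sigma(3, 2, comparisons)
ranking = sorted(range(3), key=lambda i: -scores[i])
print(ranking)   # the reliable judge's ordering dominates
print(sigmas)    # the noisy judge's discriminator shrinks toward zero
```

In this toy run the unreliable judge's discriminator is driven toward zero, so its random votes carry little weight in the inferred ranking, illustrating how a discriminator parameter can act as an unsupervised calibration signal without any human labels.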
Computer Science > Computation and Language
arXiv:2602.16610 (cs)
[Submitted on 18 Feb 2026]
Title: Who can we trust? LLM-as-a-jury for Comparative Assessment
Authors: Mengjie Qian, Guangzhi Sun, Mark J.F. Gales, Kate M. Knill
Abstract: Large language models (LLMs) are increasingly applied as automatic evaluators for natural language generation assessment, often using pairwise comparative judgements. Existing approaches typically rely on single judges, or aggregate multiple judges assuming equal reliability. In practice, LLM judges vary substantially in performance across tasks and aspects, and their judgment probabilities may be biased and inconsistent. Furthermore, human-labelled supervision for judge calibration may be unavailable. We first empirically demonstrate that inconsistencies in LLM comparison probabilities exist, and show that they limit the effectiveness of direct probability-based ranking. To address this, we study the LLM-as-a-jury setting and propose BT-sigma, a judge-aware extension of the Bradley-Terry model that introduces a discriminator parameter for each judge to jointly infer item rankings and judge reliability from pairwise comparisons alone. Experiments on benchmark NLG evaluation datasets show that BT-sigma consistently outperforms averaging-based aggregation methods, and that the learned discriminat...