[2510.18196] Contrastive Decoding Mitigates Score Range Bias in LLM-as-a-Judge
Computer Science > Computation and Language
arXiv:2510.18196 (cs)
[Submitted on 21 Oct 2025 (v1), last revised 8 Apr 2026 (this version, v2)]

Title: Contrastive Decoding Mitigates Score Range Bias in LLM-as-a-Judge
Authors: Yoshinari Fujinuma

Abstract: Large Language Models (LLMs) are commonly used as evaluators in various applications, but the reliability of their outcomes remains a challenge. One such challenge arises when LLMs are used as judges for direct assessment, i.e., assigning scores from a specified range without any references. Focusing on summarization, we first show that this challenge stems from score range bias in LLM judge outputs: the judges' scores are highly sensitive to the pre-defined score range. We also show that similar biases exist among models from the same family. We then mitigate this bias through contrastive decoding, achieving up to 11.7% average relative improvement in Spearman correlation with human judgments across different score ranges.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2510.18196 [cs.CL] (or arXiv:2510.18196v2 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2510.18196 (arXiv-issued DOI via DataCite)

Submission history
From: Yoshinari Fujinuma
[v1] Tue, 21 Oct 2025 00:47:11 UTC (9,026 KB)
[v2] W...
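The abstract does not spell out how contrastive decoding is applied to judge scoring, but a standard formulation contrasts the log-probabilities of two models over the candidate score tokens. The sketch below is a minimal illustration of that idea, not the paper's implementation: it assumes we have per-score-token logits from a stronger "expert" judge and from a weaker contrast judge (e.g., a smaller model from the same family, whose shared range bias the subtraction is meant to cancel). All names, values, and the `alpha` weight are hypothetical.

```python
import math

def contrastive_score(expert_logits, contrast_logits, scores, alpha=1.0):
    """Pick a score via contrastive decoding over candidate score tokens.

    expert_logits / contrast_logits: one logit per candidate score,
    from the expert judge and a weaker contrast judge respectively.
    The expert's log-probability is penalized by alpha times the
    contrast judge's, so biases shared by both judges cancel.
    """
    def log_softmax(logits):
        z = math.log(sum(math.exp(x) for x in logits))
        return [x - z for x in logits]

    e = log_softmax(expert_logits)
    c = log_softmax(contrast_logits)
    diff = [ei - alpha * ci for ei, ci in zip(e, c)]
    return scores[max(range(len(diff)), key=diff.__getitem__)]

# Hypothetical logits over score tokens "1".."5" for one summary.
scores = [1, 2, 3, 4, 5]
expert = [0.1, 0.3, 2.0, 2.2, 0.5]    # expert judge
contrast = [0.2, 0.2, 1.0, 2.5, 1.5]  # weaker same-family judge
print(contrastive_score(expert, contrast, scores))  # → 3
```

Note that greedy decoding from the expert alone would pick 4 (its largest logit), while the contrastive difference shifts the choice to 3, since the contrast judge is even more confident in 4, suggesting that preference reflects shared bias rather than summary quality.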