[2510.18196] Contrastive Decoding Mitigates Score Range Bias in LLM-as-a-Judge


Computer Science > Computation and Language
arXiv:2510.18196 (cs.CL)
[Submitted on 21 Oct 2025 (v1), last revised 8 Apr 2026 (this version, v2)]

Title: Contrastive Decoding Mitigates Score Range Bias in LLM-as-a-Judge
Authors: Yoshinari Fujinuma

Abstract: Large Language Models (LLMs) are commonly used as evaluators in various applications, but the reliability of their outcomes remains a challenge. One such challenge arises when LLM judges perform direct assessment, i.e., assigning scores from a specified range without any references. Focusing on summarization, we first show that this challenge stems from score range bias: LLM judge outputs are highly sensitive to the pre-defined score range. We also show that similar biases exist among models from the same family. We then mitigate this bias through contrastive decoding, achieving up to an 11.7% average relative improvement in Spearman correlation with human judgments across different score ranges.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2510.18196 [cs.CL] (arXiv:2510.18196v2 for this version), https://doi.org/10.48550/arXiv.2510.18196

Submission history:
From: Yoshinari Fujinuma
[v1] Tue, 21 Oct 2025 00:47:11 UTC (9,026 KB)
[v2] W...
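The abstract does not give the paper's exact formulation, but contrastive decoding in general scores each candidate token by how much an "expert" model's log-probability exceeds that of a weaker "amateur" (contrast) model, so shared biases subtract out. A minimal sketch of that idea applied to judge scores, with entirely hypothetical log-probabilities (the function name and `alpha` weight are illustrative assumptions, not the paper's method):

```python
def contrastive_pick(expert_logprobs, amateur_logprobs, alpha=1.0):
    """Pick the score whose expert log-prob most exceeds the contrast
    model's: argmax over s of log p_expert(s) - alpha * log p_amateur(s).
    A bias shared by both models (e.g. favoring one end of the score
    range) cancels in the difference."""
    return max(expert_logprobs,
               key=lambda s: expert_logprobs[s] - alpha * amateur_logprobs[s])

# Hypothetical log-probs over score tokens 1-5: both models lean toward
# 4 (a shared range bias), but the expert's preference for 3 is the part
# not explained by the contrast model.
expert = {1: -4.0, 2: -3.0, 3: -1.2, 4: -0.8, 5: -2.5}
amateur = {1: -4.0, 2: -3.5, 3: -2.0, 4: -0.9, 5: -2.2}

greedy = max(expert, key=expert.get)          # plain decoding -> 4
contrastive = contrastive_pick(expert, amateur)  # contrastive -> 3
```

Here greedy decoding returns 4, while the contrastive difference selects 3, illustrating how subtracting a similarly biased model's distribution can shift the chosen score.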

Originally published on April 09, 2026. Curated by AI News.

