[2508.11847] Dropping Just a Handful of Preferences Can Change Top Large Language Model Rankings

arXiv - Machine Learning · 4 min read

Statistics > Machine Learning
arXiv:2508.11847 (stat)
[Submitted on 16 Aug 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Title: Dropping Just a Handful of Preferences Can Change Top Large Language Model Rankings
Authors: Jenny Y. Huang, Yunyi Shen, Dennis Wei, Tamara Broderick

Abstract: We propose a method for evaluating the robustness of widely used LLM ranking systems -- variants of the Bradley--Terry model -- to dropping a worst-case, very small fraction of preference data. Our approach is computationally fast and easy to adopt. When we apply our method to matchups from popular LLM ranking platforms, including Chatbot Arena and its derivatives, we find that the rankings of top-performing models can be remarkably sensitive to the removal of a small fraction of preferences; for instance, dropping just 0.003% of human preferences can change the top-ranked model on Chatbot Arena. Our robustness check identifies the specific preferences most responsible for such ranking flips, so these influential preferences can be inspected. We observe that rankings derived from MT-bench preferences are notably more robust than those from Chatbot Arena, likely due to MT-bench's use of expert annotators and carefully constructed prompts. Finally, we find that neither rankings based on crowdsourced hum...
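
The abstract's technical core is the Bradley--Terry model: each model m gets a score s_m, and the probability that model i beats model j in a matchup is sigma(s_i - s_j). The robustness question is then how few preferences must be removed, in the worst case, before the fitted top rank changes. The Python sketch below illustrates that logic on synthetic data. It is not the authors' code: the gradient-ascent fitter fit_bt, the one-step influence proxy, and the synthetic strengths are all illustrative assumptions.

import numpy as np

# Synthetic Chatbot-Arena-style data: pairwise preferences between models.
# true_scores are assumed ground-truth strengths; the top two are close,
# which is the regime where the paper reports sensitivity.
rng = np.random.default_rng(0)
n_models = 4
true_scores = np.array([1.0, 0.95, 0.5, 0.2])

pairs = rng.integers(0, n_models, size=(2000, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]  # discard self-matchups
p_first = 1 / (1 + np.exp(-(true_scores[pairs[:, 0]] - true_scores[pairs[:, 1]])))
first_wins = rng.random(len(pairs)) < p_first
winners = np.where(first_wins, pairs[:, 0], pairs[:, 1])
losers = np.where(first_wins, pairs[:, 1], pairs[:, 0])

def fit_bt(winners, losers, weights, n_iter=300, lr=2.0):
    """Weighted Bradley--Terry fit by gradient ascent on the log-likelihood."""
    s = np.zeros(n_models)
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(s[winners] - s[losers])))  # P(winner beats loser)
        g = weights * (1 - p)                            # per-preference gradient weight
        grad = (np.bincount(winners, g, n_models)
                - np.bincount(losers, g, n_models))
        s += lr * grad / weights.sum()
        s -= s.mean()  # scores are identifiable only up to a shift
    return s

w_full = np.ones(len(winners))
s_full = fit_bt(winners, losers, w_full)
top, runner_up = np.argsort(s_full)[::-1][:2]

# One-step influence proxy: each preference's contribution to the gradient of
# the gap s[top] - s[runner_up]. Dropping the largest positive contributors is
# (approximately) the worst case for the current leader.
p = 1 / (1 + np.exp(-(s_full[winners] - s_full[losers])))
sign = ((winners == top).astype(float) - (losers == top)
        - (winners == runner_up) + (losers == runner_up))
influence = (1 - p) * sign

for k in (1, 5, 20, 50):
    w = np.ones(len(winners))
    w[np.argsort(influence)[::-1][:k]] = 0.0  # drop the k worst-case preferences
    s_k = fit_bt(winners, losers, w)
    print(f"dropped {k:3d}/{len(winners)} preferences -> "
          f"top model {np.argmax(s_k)} (was {top})")

The greedy influence proxy above is only a stand-in for the paper's method, which per the abstract is fast and pinpoints the preferences most responsible for a flip. The sketch does show why a handful can suffice: when the top two scores are close, the gap is dominated by the relatively few matchups played directly between the leaders.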

Originally published on March 06, 2026. Curated by AI News.

Related Articles

LLMs

[D] How to break free from LLM's chains as a PhD student?

I didn't realize it, but over the past year I have become over-reliant on ChatGPT to write code. I am a second-year PhD student and don...

Reddit - Machine Learning · 1 min
LLMs

[R] Reference-model-free behavioral discovery of AuditBench model organisms via Probe-Mediated Adaptive Auditing

Anthropic's AuditBench - 56 Llama 3.3 70B models with planted hidden behaviors - their best agent detects the behaviors 10-13% of the tim...

Reddit - Machine Learning · 1 min
LLMs

[P] Dante-2B: I'm training a 2.1B bilingual fully open Italian/English LLM from scratch on 2×H200. Phase 1 done — here's what I've built.

The problem: if you work with Italian text and local models, you know the pain. Every open-source LLM out there treats Italian as an after...

Reddit - Machine Learning · 1 min
LLMs

I have been coding for 11 years and I caught myself completely unable to debug a problem without AI assistance last month. That scared me more than anything I have seen in this industry.

I want to be honest about something that happened to me because I think it is more common than people admit. Last month I hit a bug in a ...

Reddit - Artificial Intelligence · 1 min