[2508.11847] Dropping Just a Handful of Preferences Can Change Top Large Language Model Rankings
Statistics > Machine Learning
arXiv:2508.11847 (stat)

[Submitted on 16 Aug 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Title: Dropping Just a Handful of Preferences Can Change Top Large Language Model Rankings
Authors: Jenny Y. Huang, Yunyi Shen, Dennis Wei, Tamara Broderick

Abstract: We propose a method for evaluating the robustness of widely used LLM ranking systems -- variants of a Bradley--Terry model -- to dropping a worst-case, very small fraction of preference data. Our approach is computationally fast and easy to adopt. When we apply our method to matchups from popular LLM ranking platforms, including Chatbot Arena and derivatives, we find that the rankings of top-performing models can be remarkably sensitive to the removal of a small fraction of preferences; for instance, dropping just 0.003% of human preferences can change the top-ranked model on Chatbot Arena. Our robustness check identifies the specific preferences most responsible for such ranking flips, allowing for inspection of these influential preferences. We observe that the rankings derived from MT-bench preferences are notably more robust than those from Chatbot Arena, likely due to MT-bench's use of expert annotators and carefully constructed prompts. Finally, we find that neither rankings based on crowdsourced hum...
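To make the setting concrete, below is a minimal, illustrative sketch (not the authors' method) of the kind of sensitivity check the abstract describes: fit a simple Bradley--Terry model to pairwise preferences, then greedily drop individual preferences favoring the current leader and refit, counting how many removals it takes to flip the top rank. The data, function names, and greedy heuristic are all assumptions for illustration only; the paper's actual procedure is a fast, worst-case analysis rather than this brute-force refitting.

```python
# Illustrative sketch only: a brute-force Bradley-Terry refit loop,
# not the paper's fast worst-case robustness method.
import numpy as np
from scipy.optimize import minimize

def fit_bradley_terry(wins, losses, n_models):
    """MLE of Bradley-Terry scores; wins[i]/losses[i] give the winning/losing model per preference."""
    def nll(theta):
        diff = theta[wins] - theta[losses]
        return np.sum(np.log1p(np.exp(-diff)))  # -log sigmoid(score difference)
    res = minimize(nll, np.zeros(n_models), method="L-BFGS-B")
    return res.x - res.x.mean()  # center scores for identifiability

def greedy_drop_until_flip(wins, losses, n_models, max_drops=60):
    """Greedily remove preferences won by the current leader until the top rank changes."""
    keep = np.ones(len(wins), dtype=bool)
    theta = fit_bradley_terry(wins[keep], losses[keep], n_models)
    top = int(np.argmax(theta))
    for step in range(max_drops):
        runner_up = int(np.argsort(theta)[-2])
        # Candidates: remaining preferences where the leader beat the runner-up.
        cand = np.where(keep & (wins == top) & (losses == runner_up))[0]
        if len(cand) == 0:
            break
        keep[cand[0]] = False  # drop one such preference and refit
        theta = fit_bradley_terry(wins[keep], losses[keep], n_models)
        if int(np.argmax(theta)) != top:
            return step + 1, theta  # number of dropped preferences, new scores
    return None, theta

# Tiny synthetic example (assumed data): model 0 leads model 1 only narrowly.
wins   = np.array([0]*52 + [1]*48 + [0]*35 + [2]*15 + [1]*30 + [2]*20)
losses = np.array([1]*52 + [0]*48 + [2]*35 + [0]*15 + [2]*30 + [1]*20)
n_drops, new_theta = greedy_drop_until_flip(wins, losses, n_models=3)
print("preferences dropped to flip the top rank:", n_drops)
```

In this toy setup, only a handful of the 200 preferences need to be removed before the top-ranked model changes, which mirrors the fragility the abstract reports at much larger scale on Chatbot Arena.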