[2502.01534] Preference Leakage: A Contamination Problem in LLM-as-a-judge
Computer Science > Machine Learning
arXiv:2502.01534 (cs)
[Submitted on 3 Feb 2025 (v1), last revised 4 Mar 2026 (this version, v3)]

Title: Preference Leakage: A Contamination Problem in LLM-as-a-judge
Authors: Dawei Li, Renliang Sun, Yue Huang, Ming Zhong, Bohan Jiang, Jiawei Han, Xiangliang Zhang, Wei Wang, Huan Liu

Abstract: Large Language Models (LLMs) as judges and LLM-based data synthesis have emerged as two fundamental LLM-driven data annotation methods in model development. While their combination significantly improves the efficiency of model training and evaluation, little attention has been paid to the potential contamination introduced by this new development paradigm. In this work, we expose preference leakage, a contamination problem in LLM-as-a-judge caused by relatedness between the synthetic data generator and the LLM-based evaluator. To study this issue, we first define three common types of relatedness between the data generator LLM and the judge LLM: being the same model, having an inheritance relationship, and belonging to the same model family. Through extensive experiments, we empirically confirm that judges are biased toward their related student models as a result of preference leakage, across multiple LLM baselines and benchmarks. Further analysis suggests that preference leakage is a pervasive and real-worl...
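The three relatedness types defined in the abstract can be illustrated with a small classifier over generator-judge pairs. The model names, family table, and inheritance table below are invented for the example and are not taken from the paper; this is a minimal sketch of the taxonomy, not the authors' method.

```python
# Hypothetical lookup tables: which provider family a model belongs to,
# and which model a derived model was fine-tuned or distilled from.
# All entries here are illustrative placeholders.
FAMILY = {
    "gpt-4o": "openai",
    "gpt-4o-mini": "openai",
    "gemini-1.5-pro": "google",
    "gemini-1.5-flash": "google",
}

# student model -> parent model it inherits from (e.g., via distillation)
INHERITS_FROM = {
    "gpt-4o-mini-distill": "gpt-4o",
}


def relatedness(generator: str, judge: str) -> str:
    """Classify the relationship between a data-generator LLM and a judge LLM
    into the paper's three categories, or 'unrelated' otherwise."""
    if generator == judge:
        return "same model"
    if INHERITS_FROM.get(generator) == judge or INHERITS_FROM.get(judge) == generator:
        return "inheritance"
    fam = FAMILY.get(generator)
    if fam is not None and fam == FAMILY.get(judge):
        return "same family"
    return "unrelated"
```

Under preference leakage, any pair that does not classify as "unrelated" risks the judge favoring the student model trained on its related generator's synthetic data.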