[2510.15746] LLMs Judge Themselves: A Game-Theoretic Framework for Human-Aligned Evaluation
Computer Science > Computation and Language
arXiv:2510.15746 (cs)
[Submitted on 17 Oct 2025 (v1), last revised 6 Apr 2026 (this version, v2)]

Title: LLMs Judge Themselves: A Game-Theoretic Framework for Human-Aligned Evaluation
Authors: Gao Yang, Yuhang Liu, Siyu Miao, Xinyue Liang, Zhengyang Liu, Heyan Huang

Abstract: Ideal or real - that is the question. In this work, we explore whether principles from game theory can be effectively applied to the evaluation of large language models (LLMs). This inquiry is motivated by the growing inadequacy of conventional evaluation practices, which often rely on fixed-format tasks with reference answers and struggle to capture the nuanced, subjective, and open-ended nature of modern LLM behavior. To address these challenges, we propose a novel alternative: automatic mutual evaluation, in which LLMs assess each other's outputs through self-play and peer review. These peer assessments are then systematically compared with human voting behavior to evaluate their alignment with human judgment. Our framework incorporates game-theoretic voting algorithms to aggregate peer reviews, enabling a principled investigation into whether model-generated rankings reflect human preferences. Empirical results reveal both convergences and divergences between theoretical predictions and human ...
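
As a rough illustration of the peer-review aggregation the abstract describes, the minimal sketch below (not from the paper) collects ranked ballots from LLM judges, aggregates them with a Borda-style voting rule, and measures alignment with a human ranking via Kendall's tau. The model names, ballots, and the choice of Borda count are assumptions made here for illustration; the abstract does not specify which game-theoretic voting algorithms the authors use.

    # Minimal sketch, assuming a Borda-style voting rule and hypothetical ballots;
    # this is not the authors' implementation.
    from itertools import combinations

    def borda_aggregate(ballots):
        """Aggregate ranked ballots (best-first lists of model names) into one ranking."""
        scores = {}
        for ballot in ballots:
            n = len(ballot)
            for rank, model in enumerate(ballot):
                scores[model] = scores.get(model, 0) + (n - 1 - rank)
        return sorted(scores, key=scores.get, reverse=True)

    def kendall_tau(rank_a, rank_b):
        """Kendall tau correlation between two rankings over the same items."""
        pos_a = {m: i for i, m in enumerate(rank_a)}
        pos_b = {m: i for i, m in enumerate(rank_b)}
        concordant = discordant = 0
        for x, y in combinations(rank_a, 2):
            if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
                concordant += 1
            else:
                discordant += 1
        total = concordant + discordant
        return (concordant - discordant) / total if total else 0.0

    # Each judge model ranks the other models' outputs (self-votes excluded upstream).
    peer_ballots = [
        ["model_b", "model_c", "model_d"],  # ballot from model_a
        ["model_a", "model_c", "model_d"],  # ballot from model_b
        ["model_b", "model_a", "model_d"],  # ballot from model_c
        ["model_a", "model_b", "model_c"],  # ballot from model_d
    ]
    peer_ranking = borda_aggregate(peer_ballots)
    human_ranking = ["model_a", "model_b", "model_c", "model_d"]  # hypothetical human vote
    print("peer ranking:", peer_ranking)
    print("alignment (Kendall tau):", kendall_tau(peer_ranking, human_ranking))

In this toy setup, the aggregated peer ranking and the human ranking disagree only on one pairwise comparison, so the Kendall tau is positive but below 1, mirroring the kind of partial convergence between model-generated and human rankings that the abstract reports.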