[2505.21972] LLMs Judging LLMs: A Simplex Perspective
Computer Science > Machine Learning

arXiv:2505.21972 (cs)

[Submitted on 28 May 2025 (v1), last revised 5 Apr 2026 (this version, v3)]

Title: LLMs Judging LLMs: A Simplex Perspective

Authors: Patrick Vossler, Fan Xia, Yifan Mai, Adarsh Subbaswamy, Jean Feng

Abstract: Given the challenge of automatically evaluating free-form outputs from large language models (LLMs), an increasingly common solution is to use LLMs themselves as the judging mechanism, without any gold-standard scores. Implicitly, this practice accounts only for sampling variability (aleatoric uncertainty) and ignores uncertainty about judge quality (epistemic uncertainty). While this is justified if judges are perfectly accurate, it is unclear when such an approach is theoretically valid and practically robust. We study these questions for the task of ranking LLM candidates from a novel geometric perspective: for $M$-level scoring systems, both LLM judges and candidates can be represented as points on an $(M-1)$-dimensional probability simplex, where geometric concepts (e.g., triangle areas) correspond to key ranking concepts. This perspective yields intuitive theoretical conditions and visual proofs for when rankings are identifiable; for instance, we provide a formal basis for the ``folk wisdom'' that LLM judges are more effective for two-level scoring ($M=2$) than multi-level…