[2512.01351] Benchmarking Overton Pluralism in LLMs
arXiv:2512.01351 (cs) — Computer Science > Artificial Intelligence
[Submitted on 1 Dec 2025 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: Benchmarking Overton Pluralism in LLMs
Authors: Elinor Poole-Dayan, Jiayi Wu, Taylor Sorensen, Jiaxin Pei, Michiel A. Bakker

Abstract: We introduce OVERTONBENCH, a novel framework for measuring Overton pluralism in LLMs: the extent to which diverse viewpoints are represented in model outputs. We (i) formalize Overton pluralism as a set coverage metric (OVERTONSCORE), (ii) conduct a large-scale U.S.-representative human study (N = 1208; 60 questions; 8 LLMs), and (iii) develop an automated benchmark that closely reproduces human judgments. On average, models achieve OVERTONSCOREs of 0.35--0.41, with DeepSeek V3 performing best; yet all models remain far below the theoretical maximum of 1.0, revealing substantial headroom for improvement. Because repeated large-scale human studies are costly and slow, scalable evaluation tools are essential for model development. Hence, we propose an automated benchmark that achieves high rank correlation with human judgments ($\rho = 0.88$), providing a practical proxy without replacing human assessment. By turning pluralistic alignment from a normative aim into a measurable benchmark, our work establishes a foundation for systematic progress toward more plura...
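The abstract describes OVERTONSCORE as a set coverage metric bounded by 1.0. The abstract does not give the exact formula, so the sketch below assumes the simplest coverage interpretation: the fraction of reference viewpoints (e.g., elicited from the human study) that a model's output is judged to represent. The function name and signature are illustrative, not the paper's implementation.

```python
def overton_score(reference_viewpoints: set[str], covered_viewpoints: set[str]) -> float:
    """Assumed set-coverage metric: fraction of reference viewpoints
    for a question that appear in the model's output.

    Returns a value in [0.0, 1.0]; 1.0 is the theoretical maximum
    (all reference viewpoints covered).
    """
    if not reference_viewpoints:
        return 0.0  # no reference set defined for this question
    covered = reference_viewpoints & covered_viewpoints
    return len(covered) / len(reference_viewpoints)


# Illustrative usage: 2 of 4 reference viewpoints covered -> 0.5,
# in the same ballpark as the reported 0.35--0.41 model averages.
score = overton_score({"v1", "v2", "v3", "v4"}, {"v1", "v3", "off_topic"})
print(score)  # 0.5
```

Under this reading, averaging per-question scores across the 60 questions would give a model-level OVERTONSCORE comparable across the 8 LLMs studied.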