[2604.07343] Personalized RewardBench: Evaluating Reward Models with Human Aligned Personalization
Computer Science > Computation and Language
arXiv:2604.07343 (cs)
[Submitted on 8 Apr 2026]

Title: Personalized RewardBench: Evaluating Reward Models with Human Aligned Personalization
Authors: Qiyao Ma, Dechen Gao, Rui Cai, Boqi Zhao, Hanchu Zhou, Junshan Zhang, Zhe Zhao

Abstract: Pluralistic alignment has emerged as a critical frontier in the development of Large Language Models (LLMs), with reward models (RMs) serving as a central mechanism for capturing diverse human values. While benchmarks for general response quality are prevalent, evaluating how well reward models account for individual user preferences remains an open challenge. To bridge this gap, we introduce Personalized RewardBench, a novel benchmark designed to rigorously assess reward models' capacity to model personalized preferences. We construct chosen and rejected response pairs based on strict adherence to (or violation of) user-specific rubrics, ensuring that preference distinctions are uniquely tailored to the individual. In particular, human evaluations confirm that the primary discriminative factor between pairs is strictly personal preference, with both responses maintaining high general quality (e.g., correctness, relevance, and helpfulness). Extensive testing reveals that existing state-of-the-art reward models struggle sign...
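The abstract page includes no code, but the evaluation it describes reduces to pairwise preference accuracy: a reward model passes a benchmark item when it scores the rubric-adhering (chosen) response above the rubric-violating (rejected) one. Below is a minimal sketch of that protocol, assuming the benchmark exposes (prompt, chosen, rejected) triples and a scalar-scoring reward model; the names reward_fn and pairwise_accuracy and the toy data are hypothetical, not from the paper.

    from typing import Callable, Iterable

    def pairwise_accuracy(
        reward_fn: Callable[[str, str], float],  # hypothetical scalar RM: (prompt, response) -> score
        pairs: Iterable[dict],                   # assumed triples: prompt, chosen, rejected
    ) -> float:
        """Fraction of pairs where the chosen response outscores the rejected one."""
        correct = total = 0
        for pair in pairs:
            chosen_score = reward_fn(pair["prompt"], pair["chosen"])
            rejected_score = reward_fn(pair["prompt"], pair["rejected"])
            correct += chosen_score > rejected_score  # bool counts as 0/1
            total += 1
        return correct / total if total else 0.0

    # Toy usage with an invented length-based "reward model" on two fabricated pairs.
    if __name__ == "__main__":
        pairs = [
            {"prompt": "p1", "chosen": "a longer answer", "rejected": "short"},
            {"prompt": "p2", "chosen": "ok", "rejected": "a longer answer"},
        ]
        toy_rm = lambda prompt, response: float(len(response))
        print(pairwise_accuracy(toy_rm, pairs))  # 0.5

Under this framing, the benchmark's claim is that both responses in a pair are of comparably high general quality, so a model can only exceed chance accuracy by picking up on the user-specific rubric rather than on generic quality cues.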