[2601.20838] Reward Models Inherit Value Biases from Pretraining
Computer Science > Machine Learning
arXiv:2601.20838 (cs)
[Submitted on 28 Jan 2026 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Reward Models Inherit Value Biases from Pretraining
Authors: Brian Christian, Jessica A. F. Thompson, Elle Michelle Yang, Vincent Adam, Hannah Rose Kirk, Christopher Summerfield, Tsvetomira Dumbalska

Abstract: Reward models (RMs) are central to aligning large language models (LLMs) with human values but have received less attention than pretrained and post-trained LLMs themselves. Because RMs are initialized from LLMs, they inherit representations that shape their behavior, but the nature and extent of this influence remain understudied. In a comprehensive study of 10 leading open-weight RMs using validated psycholinguistic corpora, we show that RMs exhibit significant differences along multiple dimensions of human value as a function of their base model. Using the "Big Two" psychological axes, we show a robust preference of Llama RMs for "agency" and a corresponding robust preference of Gemma RMs for "communion." This phenomenon holds even when the preference data and finetuning process are identical, and we trace it back to the logits of the respective instruction-tuned and pretrained models. These log-probability differences themselves can be formulated as an implicit RM; we derive usa...
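The abstract's notion of an "implicit RM" built from log-probability differences can be illustrated with a minimal sketch: score a completion by how much more likely an instruction-tuned model finds it than its pretrained base model. The sketch below is not the paper's code; the model names, the beta scaling factor, and the helper functions are illustrative assumptions, and it simplifies by assuming the prompt tokenizes identically as a prefix of prompt + completion.

```python
# Minimal sketch (assumptions, not the paper's implementation): an "implicit
# reward" for a completion y given prompt x, computed as the difference in
# summed log-probabilities under an instruction-tuned model vs. its base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def completion_logprob(model, tokenizer, prompt: str, completion: str) -> float:
    """Sum of log-probabilities the model assigns to the completion tokens."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # [1, seq_len, vocab]
    # logits at position t predict token t+1, so shift by one.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_logps = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the completion tokens (assumes prompt tokens form a prefix).
    completion_start = prompt_ids.shape[1] - 1
    return token_logps[0, completion_start:].sum().item()


def implicit_reward(tuned, base, tokenizer, prompt, completion, beta: float = 1.0) -> float:
    """beta * (log p_tuned(y|x) - log p_base(y|x)), an implicit reward score."""
    return beta * (
        completion_logprob(tuned, tokenizer, prompt, completion)
        - completion_logprob(base, tokenizer, prompt, completion)
    )


if __name__ == "__main__":
    # Placeholder models for a runnable demo; a real use would pair an
    # instruction-tuned checkpoint with its pretrained base (e.g. Llama or Gemma).
    tok = AutoTokenizer.from_pretrained("gpt2")
    tuned = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    base = AutoModelForCausalLM.from_pretrained("distilgpt2").eval()
    score = implicit_reward(tuned, base, tok, "The assistant is ", "helpful and kind.")
    print(f"implicit reward: {score:.3f}")
```

Under this reading, two completions can be compared by their implicit rewards without ever training an explicit reward head, which is why value differences already present in the instruction-tuned and pretrained logits would carry over into downstream preferences.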