[2605.06895] Mitigating Cognitive Bias in RLHF by Altering Rationality
Computer Science > Artificial Intelligence
arXiv:2605.06895 (cs)
[Submitted on 7 May 2026]

Title: Mitigating Cognitive Bias in RLHF by Altering Rationality
Authors: Tiffany Horter, Andrew Markham, Niki Trigoni, Serena Booth

Abstract: How can we make models robust even to imperfect human feedback? In reinforcement learning from human feedback (RLHF), human preferences over model outputs are used to train a reward model that assigns scalar values to responses. Because these rewards are inferred from pairwise comparisons, learning depends on an assumed relationship between latent reward differences and observed preferences, typically modeled with a Boltzmann formulation in which a rationality parameter beta governs how consistently preferences reflect reward differences. In practice, beta is typically treated as a fixed constant, reflecting an assumption of uniform annotator reliability. However, real human feedback is not this simple: human judgments are shaped by cognitive biases, leading to systematic, context-dependent deviations from reward-consistent behavior. To address this, we treat rationality as context- and annotation-dependent. We design an approach to dynamically adjust the rationality parameter beta during reward learning using an LLM-as-judge to assess the ...
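The Boltzmann formulation described in the abstract can be illustrated with a short sketch. This is not the paper's implementation, just the standard Bradley-Terry-style preference probability it builds on: the chance that response A is preferred over response B is a sigmoid of the reward gap scaled by the rationality parameter beta. The function name and reward values below are illustrative.

```python
import math

def preference_prob(r_a: float, r_b: float, beta: float) -> float:
    """Boltzmann (Bradley-Terry-style) probability that response A is
    preferred over response B, given latent rewards r_a, r_b and a
    rationality parameter beta.

    beta -> infinity: preferences deterministically follow reward order.
    beta = 0: preferences are random (0.5) regardless of the reward gap.
    """
    return 1.0 / (1.0 + math.exp(-beta * (r_a - r_b)))

# A highly "rational" annotator almost always prefers the higher-reward
# response; a noisier one only weakly tracks the reward gap.
p_rational = preference_prob(2.0, 1.0, beta=5.0)  # close to 1.0
p_noisy = preference_prob(2.0, 1.0, beta=0.5)     # closer to 0.5
```

The paper's contribution, per the abstract, is to make beta vary per annotation (e.g. informed by an LLM-as-judge) rather than fixing it to a single constant, so that comparisons likely affected by cognitive bias contribute less confident preference signals during reward learning.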