[2603.23184] ImplicitRM: Unbiased Reward Modeling from Implicit Preference Data for LLM alignment
Computer Science > Computation and Language
arXiv:2603.23184 (cs)
[Submitted on 24 Mar 2026]

Title: ImplicitRM: Unbiased Reward Modeling from Implicit Preference Data for LLM alignment
Authors: Hao Wang, Haocheng Yang, Licheng Pan, Lei Shen, Xiaoxi Li, Yinuo Wang, Zhichao Chen, Yuan Lu, Haoxuan Li, Zhouchen Lin

Abstract: Reward modeling represents a long-standing challenge in reinforcement learning from human feedback (RLHF) for aligning language models. Current reward modeling depends heavily on explicit feedback data, which is costly to collect. In this work, we study implicit reward modeling -- learning reward models from implicit human feedback (e.g., clicks and copies) -- as a cost-effective alternative. We identify two fundamental challenges in implicit reward modeling: (1) implicit preference data lacks definitive negative samples, which makes standard positive-negative classification methods inapplicable; (2) implicit preference data suffers from user preference bias, where different responses have different propensities to elicit user feedback actions, which further complicates the identification of true negatives. To address these challenges, we propose ImplicitRM, which aims to learn unbiased reward models from implicit preference data. ImplicitRM stratif...
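Since the abstract is cut off before the method details, the following is a minimal illustrative sketch, not the ImplicitRM objective, of how the two stated challenges are commonly handled together: a non-negative positive-unlabeled (PU) risk for the missing definitive negatives, weighted by inverse propensity scoring (IPS) to counter the feedback-propensity bias. All function and variable names here are hypothetical.

```python
# Illustrative sketch only -- NOT the ImplicitRM objective (the abstract above
# is truncated before the method). It combines two standard ideas that map onto
# the abstract's two challenges: a non-negative positive-unlabeled (PU) risk,
# since implicit feedback yields no definitive negatives, and inverse
# propensity scoring (IPS), since responses differ in how readily they elicit
# feedback actions. All names are hypothetical.
import torch
import torch.nn.functional as F

def pu_ips_reward_loss(scores, acted, propensity, pos_prior=0.3):
    """Propensity-weighted non-negative PU risk over implicit feedback.

    scores:     (B,) reward-model scores for responses shown to users
    acted:      (B,) bool; True if the response drew an implicit action
                (e.g., a click or copy)
    propensity: (B,) estimated probability that each response elicits an
                action when preferred (must be > 0)
    pos_prior:  assumed fraction of truly preferred responses overall
    """
    pos, unl = scores[acted], scores[~acted]
    # IPS weights: up-weight positives that rarely trigger feedback actions,
    # so low-propensity responses are not systematically undertrained.
    w = 1.0 / propensity[acted]
    w = w / w.sum()
    # Risk of observed positives labeled as positive (logistic loss).
    risk_pos = (w * F.softplus(-pos)).sum()
    # Unlabeled items are a mix of preferred and non-preferred responses, so
    # their "negative" risk is corrected by subtracting the positives'
    # contribution and clamped at zero (the non-negative PU estimator).
    risk_neg = torch.clamp(
        F.softplus(unl).mean() - pos_prior * (w * F.softplus(pos)).sum(),
        min=0.0,
    )
    return pos_prior * risk_pos + risk_neg

# Toy usage with dummy data (a real setup would score responses with an LLM head).
scores = torch.randn(16, requires_grad=True)
acted = torch.zeros(16, dtype=torch.bool)
acted[:5] = True                        # ensure some observed actions
propensity = torch.rand(16) * 0.5 + 0.25
loss = pu_ips_reward_loss(scores, acted, propensity)
loss.backward()
```

Under the usual IPS assumptions (known, strictly positive propensities), the reweighted positive risk is an unbiased estimate of the full-information risk, while the clamp keeps the PU correction from turning negative on small batches.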