[2601.22664] Real-Time Aligned Reward Model beyond Semantics
Computer Science > Artificial Intelligence
arXiv:2601.22664 (cs)
[Submitted on 30 Jan 2026 (v1), last revised 27 Feb 2026 (this version, v2)]

Title: Real-Time Aligned Reward Model beyond Semantics
Authors: Zixuan Huang, Xin Xia, Yuxi Ren, Jianbin Zheng, Xuefeng Xiao, Hongyan Xie, Li Huaqiu, Songshi Liang, Zhongxiang Dai, Fuzhen Zhuang, Jianxin Li, Yikun Ban, Deqing Wang

Abstract: Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique for aligning large language models (LLMs) with human preferences, yet it is susceptible to reward overoptimization, in which policy models overfit to the reward model and exploit spurious reward patterns instead of faithfully capturing human intent. Prior mitigations rely primarily on surface semantic information and fail to efficiently address the misalignment between the reward model (RM) and the policy model caused by continuous policy distribution shifts. This inevitably leads to a growing reward discrepancy, exacerbating reward overoptimization. To address these limitations, we introduce R2M (Real-Time Aligned Reward Model), a novel lightweight RLHF framework. R2M goes beyond vanilla reward models that depend solely on the semantic representations of a pretrained LLM. Instead, it leverages the evolving hidden states of the policy (namely, policy feedback) to align with the real...
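The abstract only sketches the mechanism, but one hedged reading is that the reward head scores a fusion of the RM's own semantic representation with the current policy's hidden states, so the reward signal can track the policy as its distribution shifts. Below is a minimal PyTorch sketch of that reading; the class and parameter names (PolicyFeedbackRewardHead, sem_dim, pol_dim) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class PolicyFeedbackRewardHead(nn.Module):
    # Hypothetical reward head conditioning on both semantic and policy features
    # (an assumed reading of "policy feedback", not the paper's actual architecture).
    def __init__(self, sem_dim: int, pol_dim: int, hidden_dim: int = 256):
        super().__init__()
        # Project the policy's hidden state into the RM's representation space.
        self.pol_proj = nn.Linear(pol_dim, sem_dim)
        # Score the fused representation instead of the semantic features alone.
        self.scorer = nn.Sequential(
            nn.Linear(2 * sem_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, sem_feat: torch.Tensor, pol_hidden: torch.Tensor) -> torch.Tensor:
        # sem_feat:   (batch, sem_dim)  representation from the reward model backbone
        # pol_hidden: (batch, pol_dim)  current policy hidden state for the same response
        fused = torch.cat([sem_feat, self.pol_proj(pol_hidden)], dim=-1)
        return self.scorer(fused).squeeze(-1)  # one scalar reward per sequence

# Toy usage: score a batch of 4 responses.
head = PolicyFeedbackRewardHead(sem_dim=1024, pol_dim=2048)
reward = head(torch.randn(4, 1024), torch.randn(4, 2048))
print(reward.shape)  # torch.Size([4])

Under this reading, the semantic backbone can stay fixed while only the small fusion head consumes fresh policy states, which would be consistent with the "lightweight" framing in the abstract.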