[2412.19436] Low-Rank Contextual Reinforcement Learning from Heterogeneous Human Feedback
Statistics > Machine Learning
arXiv:2412.19436 (stat)
[Submitted on 27 Dec 2024 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: Low-Rank Contextual Reinforcement Learning from Heterogeneous Human Feedback
Authors: Seong Jin Lee, Will Wei Sun, Yufeng Liu

Abstract: Reinforcement learning from human feedback (RLHF) has become a cornerstone for aligning large language models with human preferences. However, the heterogeneity of human feedback, driven by diverse individual contexts and preferences, poses significant challenges for reward learning. To address this, we propose a Low-rank Contextual RLHF (LoCo-RLHF) framework that integrates contextual information to better model heterogeneous feedback while maintaining computational efficiency. Our approach builds on a contextual preference model, leveraging the intrinsic low-rank structure of the interaction between user contexts and query-answer pairs to mitigate the high dimensionality of feature representations. Furthermore, we address the challenge of distributional shifts in feedback through our Pessimism in Reduced Subspace (PRS) policy, inspired by pessimistic offline reinforcement learning techniques. We theoretically demonstrate that our policy achieves a tighter sub-optimality gap compared to existing methods. Extensive experiments validate ...
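The contextual preference model described in the abstract can be illustrated with a minimal sketch: a bilinear reward in which query-answer features interact with user-context features through a low-rank matrix, plugged into a Bradley-Terry preference probability. All dimensions, feature vectors, and function names below are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d_q features for a query-answer pair,
# d_c features for a user's context, and low rank r << min(d_q, d_c).
d_q, d_c, r = 20, 10, 3

# Low-rank interaction matrix Theta = U @ V.T, capturing how user
# context modulates the reward of a query-answer pair.
U = rng.normal(size=(d_q, r))
V = rng.normal(size=(d_c, r))
Theta = U @ V.T

def reward(phi, psi):
    """Bilinear, context-dependent reward: phi^T Theta psi."""
    return phi @ Theta @ psi

def pref_prob(phi_a, phi_b, psi):
    """Bradley-Terry probability that answer a is preferred to b
    by a user with context features psi."""
    diff = reward(phi_a, psi) - reward(phi_b, psi)
    return 1.0 / (1.0 + np.exp(-diff))

# Example: two candidate answers to the same query, one user context.
phi_a = rng.normal(size=d_q)
phi_b = rng.normal(size=d_q)
psi = rng.normal(size=d_c)
p = pref_prob(phi_a, phi_b, psi)
print(0.0 < p < 1.0)           # a valid probability
print(np.linalg.matrix_rank(Theta) == r)  # Theta is rank-r, not full rank
```

Because the same answer pair can receive different preference probabilities under different contexts psi, heterogeneous feedback is modeled without fitting a separate reward per user; the low-rank factorization keeps the parameter count at (d_q + d_c) * r rather than d_q * d_c.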