[2509.24159] RE-PO: Robust Enhanced Policy Optimization as a General Framework for LLM Alignment
Computer Science > Artificial Intelligence
arXiv:2509.24159 (cs)
[Submitted on 29 Sep 2025 (v1), last revised 27 Feb 2026 (this version, v4)]

Title: RE-PO: Robust Enhanced Policy Optimization as a General Framework for LLM Alignment
Authors: Xiaoyang Cao, Zelai Xu, Mo Guang, Kaiwen Long, Michiel A. Bakker, Yu Wang, Chao Yu

Abstract: Standard human preference-based alignment methods, such as Reinforcement Learning from Human Feedback (RLHF), are a cornerstone for aligning large language models (LLMs) with human values. However, these methods typically assume that preference data is clean and that all labels are equally reliable. In practice, large-scale preference datasets contain substantial noise due to annotator mistakes, inconsistent instructions, varying expertise, and even adversarial or low-effort feedback. This mismatch between recorded labels and ground-truth preferences can misguide training and degrade model performance. To address this issue, we introduce Robust Enhanced Policy Optimization (RE-PO), which uses an expectation-maximization procedure to infer the posterior correctness of each label and then adaptively reweight data points in the training loss to mitigate label noise. We further generalize this idea by establishing a theoretical link between arbitrary preference losses and t...
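The abstract outlines the core mechanism: an expectation-maximization step estimates how likely each recorded preference label is to be correct, and that posterior is used to reweight the per-example training loss. Below is a minimal, illustrative sketch of this idea, not the paper's implementation: it assumes a Bradley-Terry pairwise preference loss and a single fixed label-flip rate eps, and the function and variable names are hypothetical.

import torch
import torch.nn.functional as F

def em_reweighted_preference_loss(margins: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    # margins: reward(chosen) - reward(rejected) under the current reward/policy model.
    # Assumed Bradley-Terry model: p(label is correct | model) = sigmoid(margin).
    p_model = torch.sigmoid(margins)
    # E-step: posterior that each recorded label is correct, mixing the model's
    # belief with an assumed flip rate eps (the paper infers per-label posteriors;
    # this fixed-eps noise model is a simplifying assumption).
    posterior = (p_model * (1 - eps)) / (p_model * (1 - eps) + (1 - p_model) * eps)
    posterior = posterior.detach()  # treat the weights as constants during the M-step
    # M-step: downweight likely-mislabeled pairs in the standard pairwise log-loss.
    per_example_loss = -F.logsigmoid(margins)
    return (posterior * per_example_loss).mean()

# Usage sketch: margins would come from a reward model or a DPO-style implicit reward.
margins = torch.randn(8, requires_grad=True)  # stand-in for real reward margins
loss = em_reweighted_preference_loss(margins, eps=0.1)
loss.backward()

The key design choice illustrated here is that examples the current model strongly disagrees with receive a small posterior weight, so suspected label noise contributes little gradient, while confidently consistent pairs are trained on at full weight.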