[2502.14400] HPS: Hard Preference Sampling for Human Preference Alignment
Computer Science > Artificial Intelligence
arXiv:2502.14400 (cs)
[Submitted on 20 Feb 2025 (v1), last revised 20 Mar 2026 (this version, v5)]

Title: HPS: Hard Preference Sampling for Human Preference Alignment
Authors: Xiandong Zou, Wanyu Lin, Yuchen Li, Pan Zhou

Abstract: Aligning Large Language Model (LLM) responses with human preferences is vital for building safe and controllable AI systems. While preference optimization methods based on Plackett-Luce (PL) and Bradley-Terry (BT) models have shown promise, they face challenges such as poor handling of harmful content, inefficient use of dispreferred responses, and, specifically for PL, high computational costs. To address these issues, we propose Hard Preference Sampling (HPS), a novel framework for robust and efficient human preference alignment. HPS introduces a training loss that prioritizes the most preferred response while rejecting all dispreferred and harmful ones. It emphasizes "hard" dispreferred responses -- those closely resembling preferred ones -- to enhance the model's rejection capabilities. By leveraging a single-sample Monte Carlo sampling strategy, HPS reduces computational overhead while maintaining alignment quality. Theoretically, HPS improves sample efficiency over existing PL methods and maximizes the reward margin between preferred and dispreferred responses …
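To make the abstract's core idea concrete, below is a minimal PyTorch sketch of a hard-negative-weighted preference loss in the spirit HPS describes: the preferred response is contrasted against a pool of dispreferred responses, with "hard" negatives (those scoring close to the preferred one) up-weighted. This is an illustrative reading of the abstract only, not the paper's exact objective; the function name `hps_style_loss`, the softmax hardness weighting, and the `beta`/`hardness_temp` parameters are all assumptions.

```python
import torch
import torch.nn.functional as F

def hps_style_loss(r_pref: torch.Tensor,
                   r_dispref: torch.Tensor,
                   beta: float = 1.0,
                   hardness_temp: float = 1.0) -> torch.Tensor:
    """Contrastive preference loss that up-weights "hard" dispreferred
    responses, i.e. those whose scores are close to the preferred one.

    r_pref:    (batch,)   scalar rewards/log-ratios of the preferred response
    r_dispref: (batch, k) scalar rewards/log-ratios of k dispreferred responses

    NOTE: a sketch of the abstract's idea, not the exact HPS objective.
    """
    # Hardness weights: dispreferred responses scoring close to (or above)
    # the preferred one receive larger weight (assumed weighting scheme).
    gap = r_dispref - r_pref.unsqueeze(-1)                    # (batch, k)
    w = torch.softmax(gap / hardness_temp, dim=-1).detach()   # stop-grad weights

    # Plackett-Luce-style objective: maximize the margin of the preferred
    # response over the hardness-weighted dispreferred pool.
    weighted_neg = torch.logsumexp(beta * r_dispref + torch.log(w + 1e-8), dim=-1)
    loss = -F.logsigmoid(beta * r_pref - weighted_neg)
    return loss.mean()

# Usage with dummy scores: 8 prompts, 4 dispreferred responses each.
r_pref = torch.randn(8)
r_dispref = torch.randn(8, 4)
print(hps_style_loss(r_pref, r_dispref))
```

Detaching the hardness weights treats them as fixed importance weights within each step, so gradients flow only through the reward terms; the abstract's single-sample Monte Carlo strategy would correspond to drawing one dispreferred response per step instead of scoring the full pool.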