[2506.05619] Beyond RLHF and NLHF: Population-Proportional Alignment under an Axiomatic Framework

Computer Science > Artificial Intelligence
arXiv:2506.05619 (cs)
[Submitted on 5 Jun 2025 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: Beyond RLHF and NLHF: Population-Proportional Alignment under an Axiomatic Framework
Authors: Kihyun Kim, Jiawei Zhang, Asuman Ozdaglar, Pablo A. Parrilo

Abstract: Conventional preference learning methods often prioritize opinions held more widely when aggregating preferences from multiple evaluators. This can produce policies that are biased in favor of particular opinions or groups and susceptible to strategic manipulation. To address this issue, we develop a novel preference learning framework capable of aligning aggregate opinions and policies proportionally with the true population distribution of evaluator preferences. Grounded in social choice theory, our approach infers the feasible set of evaluator population distributions directly from pairwise comparison data. Using these estimates, the algorithm constructs a policy that satisfies foundational axioms from social choice theory, namely monotonicity and Pareto efficiency, as well as our newly introduced axioms of population-proportional alignment and population-bounded manipulability. Moreover, we propose a soft-max relaxation method that smoothly trades off population-proportional alignment ...
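The abstract outlines a two-step recipe: infer the population distribution over evaluator preference types from pairwise comparison data, then construct a policy whose behavior tracks that distribution, with a soft-max relaxation controlling how strictly. The sketch below is a minimal illustration of that general shape, not the authors' algorithm: it assumes a small finite set of known evaluator types, point-estimates a single mixture weight vector via non-negative least squares on observed pairwise win rates, and applies a temperature-scaled soft-max over population-weighted utilities. All function names and the estimator are illustrative assumptions.

```python
# Hypothetical sketch of population-proportional aggregation: estimate a
# distribution over evaluator types from pairwise win rates, then form a
# soft-max policy over candidate responses. Not the paper's algorithm.
import numpy as np
from scipy.optimize import nnls

def estimate_population_weights(observed_win_rates, type_win_rates):
    """Fit mixture weights over K evaluator types so the mixture's
    predicted pairwise win rates match the observed ones.

    observed_win_rates: (P,) observed P(a beats b) over P response pairs.
    type_win_rates:     (P, K) the same win rates under each known type.
    """
    w, _ = nnls(type_win_rates, observed_win_rates)  # non-negative least squares
    return w / w.sum()                               # renormalize onto the simplex

def softmax_policy(type_utilities, weights, temperature=1.0):
    """Soft-max relaxation: score each of A responses by the
    population-weighted average of per-type utilities, then soften.

    type_utilities: (K, A) utility of each response under each type.
    """
    scores = weights @ type_utilities        # population-proportional scores
    logits = scores / temperature            # low T -> near-argmax policy,
    z = np.exp(logits - logits.max())        # high T -> smoother policy
    return z / z.sum()

# Toy usage: two evaluator types, three responses, three observed pairs.
type_win = np.array([[0.9, 0.2],   # P(a beats b) under type 1, type 2
                     [0.7, 0.4],   # P(a beats c)
                     [0.3, 0.8]])  # P(b beats c)
observed = np.array([0.55, 0.55, 0.55])      # data from a 50/50 population
w = estimate_population_weights(observed, type_win)
util = np.array([[1.0, 0.6, 0.0],            # type-1 utilities per response
                 [0.0, 0.2, 1.0]])           # type-2 utilities per response
print(w)                                     # ~[0.5, 0.5]
print(softmax_policy(util, w, temperature=0.5))
```

In this toy setting the temperature is the relaxation knob: as it shrinks, the policy concentrates on the top population-weighted response; as it grows, probability spreads out. The abstract's description of what the paper's soft-max relaxation trades alignment against is truncated above, so this knob is only an assumption about the general mechanism.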

Originally published on March 03, 2026. Curated by AI News.

