[2603.23565] Safe Reinforcement Learning with Preference-based Constraint Inference
Computer Science > Machine Learning

arXiv:2603.23565 (cs)

[Submitted on 24 Mar 2026]

Title: Safe Reinforcement Learning with Preference-based Constraint Inference

Authors: Chenglin Li, Guangchun Ruan, Hua Geng

Abstract: Safe reinforcement learning (RL) is a standard paradigm for safety-critical decision making. However, real-world safety constraints can be complex, subjective, and hard to specify explicitly. Existing works on constraint inference rely on restrictive assumptions or extensive expert demonstrations, which are unrealistic in many real-world applications. How to cheaply and reliably learn these constraints is the central challenge of this study. While inferring constraints from human preferences offers a data-efficient alternative, we identify that the popular Bradley-Terry (BT) model fails to capture the asymmetric, heavy-tailed nature of safety costs, resulting in risk underestimation. Moreover, the impact of BT models on downstream policy learning remains largely unexamined in the literature. To address these knowledge gaps, we propose a novel approach, Preference-based Constrained Reinforcement Learning (PbCRL). We introduce a novel dead zone mechanism into preference modeling and theoretically prove that it encourages heavy-tailed cost distributions, thereby achieving better ...
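The abstract gives no equations, but the BT model it critiques is standard. As a minimal sketch, the snippet below contrasts a BT preference likelihood over learned trajectory costs with one plausible reading of the proposed dead zone mechanism, under which small cost gaps contribute no preference signal. The function names, the parameter delta, and the soft-threshold form of the dead zone are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch: BT preference likelihood over trajectory costs, plus an
# assumed dead-zone variant. Lower cost = safer = preferred.
import torch


def bt_preference_logprob(cost_1: torch.Tensor, cost_2: torch.Tensor) -> torch.Tensor:
    """Standard BT model: log P(traj 1 preferred over traj 2) = log sigma(c2 - c1)."""
    return torch.nn.functional.logsigmoid(cost_2 - cost_1)


def dead_zone_preference_logprob(
    cost_1: torch.Tensor, cost_2: torch.Tensor, delta: float = 0.1
) -> torch.Tensor:
    """Assumed dead-zone variant: cost gaps inside [-delta, delta] are
    shrunk to zero, so near-ties carry no gradient and the cost model is
    only pushed to separate clearly distinguishable trajectories; large
    gaps pass through, tolerating heavy-tailed cost values."""
    gap = cost_2 - cost_1
    # Soft-threshold: zero out gaps smaller than delta, shift larger ones.
    shrunk = torch.sign(gap) * torch.clamp(gap.abs() - delta, min=0.0)
    return torch.nn.functional.logsigmoid(shrunk)
```

Under this reading, the dead zone removes the pressure the plain BT loss exerts to assign nearly symmetric, tightly clustered costs to comparable trajectories, which is one way the heavy-tailed behavior claimed in the abstract could arise.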