[2510.18478] Safe But Not Sorry: Reducing Over-Conservatism in Safety Critics via Uncertainty-Aware Modulation
Summary
This paper introduces the Uncertain Safety Critic (USC), a method that integrates uncertainty-aware modulation into safety-critic training to balance safety constraints against task performance in reinforcement learning (RL).
Why It Matters
As RL systems are increasingly deployed in real-world applications, ensuring their safety without compromising performance is crucial. The USC method addresses the challenge of over-conservatism in safety critics, potentially leading to more effective and safer RL implementations.
Key Takeaways
- USC reduces safety violations by approximately 40% while maintaining or improving rewards.
- The approach allows for effective trade-offs between safety and performance in RL.
- USC reduces the error between predicted and true cost gradients by approximately 83%, enhancing training efficiency.
Computer Science > Machine Learning
arXiv:2510.18478 (cs)
[Submitted on 21 Oct 2025 (v1), last revised 18 Feb 2026 (this version, v2)]
Title: Safe But Not Sorry: Reducing Over-Conservatism in Safety Critics via Uncertainty-Aware Modulation
Authors: Daniel Bethell, Simos Gerasimou, Radu Calinescu, Calum Imrie
Abstract: Ensuring the safe exploration of reinforcement learning (RL) agents is critical for deployment in real-world systems. Yet existing approaches struggle to strike the right balance: methods that tightly enforce safety often cripple task performance, while those that prioritize reward leave safety constraints frequently violated, producing diffuse cost landscapes that flatten gradients and stall policy improvement. We introduce the Uncertain Safety Critic (USC), a novel approach that integrates uncertainty-aware modulation and refinement into critic training. By concentrating conservatism in uncertain and costly regions while preserving sharp gradients in safe areas, USC enables policies to achieve effective reward-safety trade-offs. Extensive experiments show that USC reduces safety violations by approximately 40% while maintaining competitive or higher rewards, and reduces the error between predicted and true cost gradients by approximately 83%, breaking the prevailing ...
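The abstract's core idea, adding conservatism only where the critic is uncertain so that confident regions keep sharp cost gradients, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: it assumes an ensemble of cost critics and uses ensemble disagreement as the epistemic-uncertainty proxy, with a hypothetical scaling coefficient `kappa`.

```python
import numpy as np

def modulated_cost_estimate(cost_preds, kappa=1.0):
    """Combine an ensemble of cost-critic predictions into a single
    conservative estimate, scaling pessimism by ensemble disagreement
    (a common epistemic-uncertainty proxy; illustrative, not USC itself).

    cost_preds: array of shape (ensemble_size, batch), each row one
    member's predicted cost for the same state-action pairs.
    """
    mean = cost_preds.mean(axis=0)        # consensus cost estimate
    uncertainty = cost_preds.std(axis=0)  # disagreement across members
    # Pessimism is added only in proportion to uncertainty, so
    # low-disagreement (confident) regions stay nearly unbiased and
    # retain sharp cost gradients.
    return mean + kappa * uncertainty

# Toy usage: 5 ensemble members, 3 state-action pairs.
preds = np.array([
    [0.1, 0.5, 0.9],
    [0.1, 0.4, 1.2],
    [0.1, 0.6, 0.7],
    [0.1, 0.5, 1.1],
    [0.1, 0.5, 0.6],
])
conservative = modulated_cost_estimate(preds, kappa=1.0)
# First pair: members agree exactly, so no pessimism is added.
```

Under a uniform pessimism penalty, the first (perfectly agreed-upon) pair would be inflated just as much as the third (highly disputed) one; modulating by disagreement avoids that over-conservatism.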