[2602.12444] Safe Reinforcement Learning via Recovery-based Shielding with Gaussian Process Dynamics Models
Summary
This paper presents a novel recovery-based shielding framework for safe reinforcement learning (RL) that uses Gaussian process dynamics models to provide provable safety guarantees for unknown, non-linear continuous dynamical systems.
Why It Matters
As reinforcement learning is increasingly applied in safety-critical domains, establishing provable safety guarantees is essential. This research introduces a method that combines a backup policy with uncertainty quantification, enabling safer exploration and learning in RL applications.
Key Takeaways
- Introduces a recovery-based shielding framework for safe RL.
- Utilizes Gaussian process models for uncertainty quantification.
- Demonstrates strong performance in continuous control environments.
- Enables unrestricted exploration while maintaining safety compliance.
- Provides provable safety lower bounds for unknown dynamical systems.
Computer Science > Machine Learning
arXiv:2602.12444 (cs)
[Submitted on 12 Feb 2026]
Title: Safe Reinforcement Learning via Recovery-based Shielding with Gaussian Process Dynamics Models
Authors: Alexander W. Goodall, Francesco Belardinelli
Abstract: Reinforcement learning (RL) is a powerful framework for optimal decision-making and control but often lacks provable guarantees for safety-critical applications. In this paper, we introduce a novel recovery-based shielding framework that enables safe RL with a provable safety lower bound for unknown and non-linear continuous dynamical systems. The proposed approach integrates a backup policy (shield) with the RL agent, leveraging Gaussian process (GP) based uncertainty quantification to predict potential violations of safety constraints, dynamically recovering to safe trajectories only when necessary. Experience gathered by the 'shielded' agent is used to construct the GP models, with policy optimization via internal model-based sampling, enabling unrestricted exploration and sample-efficient learning without compromising safety. Empirically, our approach demonstrates strong performance and strict safety compliance on a suite of continuous control environments.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (c...
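The core mechanism described in the abstract, intervening with the backup policy only when a GP-based pessimistic prediction leaves the safe set, can be sketched as follows. This is a minimal illustration under assumed details, not the paper's implementation: the 1-D dynamics, the safe-set bound `SAFE_LIMIT`, the confidence multiplier `BETA`, and the use of scikit-learn's GP regressor are all stand-ins for the paper's learned GP dynamics models and formal safety bound.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical 1-D system with unknown dynamics x' = x + a + noise.
# The shielded agent's experience (state, action) -> next state is used
# to fit the GP dynamics model, as in the paper's data-gathering loop.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 2))          # (state, action) pairs
y = X[:, 0] + X[:, 1] + rng.normal(0, 0.01, 50)   # noisy next states

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), random_state=0)
gp.fit(X, y)

SAFE_LIMIT = 0.9   # assumed safe set: |x| < 0.9
BETA = 2.0         # assumed confidence multiplier on the GP std

def shielded_action(state, rl_action, backup_action):
    """Keep the RL action only if the GP's pessimistic next-state
    estimate (mean plus BETA standard deviations) stays in the safe
    set; otherwise recover with the backup policy's action."""
    mean, std = gp.predict([[state, rl_action]], return_std=True)
    worst_case = abs(mean[0]) + BETA * std[0]
    return rl_action if worst_case < SAFE_LIMIT else backup_action

# Near the origin the RL action is predicted safe and passes through;
# near the boundary the shield recovers with the backup action.
print(shielded_action(0.0, 0.5, -0.5))
print(shielded_action(0.8, 0.5, -0.5))
```

The key design point mirrored here is that the shield is recovery-based rather than preemptive: the agent explores freely, and the backup policy is consulted only when the uncertainty-aware prediction indicates a potential constraint violation.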