[2602.12444] Safe Reinforcement Learning via Recovery-based Shielding with Gaussian Process Dynamics Models

arXiv - AI 3 min read Article

Summary

This paper presents a novel recovery-based shielding framework for safe reinforcement learning (RL) using Gaussian process dynamics models, providing provable safety guarantees for unknown, non-linear continuous dynamical systems.

Why It Matters

As reinforcement learning is increasingly applied in safety-critical domains, establishing provable safety guarantees is essential. This research introduces a method that combines a backup policy with uncertainty quantification, enabling safer exploration and learning in RL applications.

Key Takeaways

  • Introduces a recovery-based shielding framework for safe RL.
  • Utilizes Gaussian process models for uncertainty quantification.
  • Demonstrates strong performance in continuous control environments.
  • Enables unrestricted exploration while maintaining safety compliance.
  • Provides provable safety lower bounds for unknown dynamical systems.

Computer Science > Machine Learning

arXiv:2602.12444 (cs) [Submitted on 12 Feb 2026]

Title: Safe Reinforcement Learning via Recovery-based Shielding with Gaussian Process Dynamics Models

Authors: Alexander W. Goodall, Francesco Belardinelli

Abstract: Reinforcement learning (RL) is a powerful framework for optimal decision-making and control but often lacks provable guarantees for safety-critical applications. In this paper, we introduce a novel recovery-based shielding framework that enables safe RL with a provable safety lower bound for unknown and non-linear continuous dynamical systems. The proposed approach integrates a backup policy (shield) with the RL agent, leveraging Gaussian process (GP) based uncertainty quantification to predict potential violations of safety constraints, dynamically recovering to safe trajectories only when necessary. Experience gathered by the 'shielded' agent is used to construct the GP models, with policy optimization via internal model-based sampling - enabling unrestricted exploration and sample efficient learning, without compromising safety. Empirically our approach demonstrates strong performance and strict safety-compliance on a suite of continuous control environments.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (c...
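The shield described in the abstract can be sketched in a few lines: a GP fit to observed transitions gives a mean prediction of the next state plus an uncertainty estimate, and the agent's action is overridden by the backup policy only when the worst-case prediction crosses the safety limit. The sketch below is a minimal illustration, not the paper's implementation; it assumes a 1-D state, a scalar safety limit, and a fixed confidence multiplier `beta`, and the names `GPDynamics` and `shielded_action` are ours.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between row-vector sets A (n,d) and B (m,d)."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls**2)

class GPDynamics:
    """Exact GP posterior over a scalar next-state given (state, action) inputs."""
    def __init__(self, X, y, noise=1e-4):
        self.X = X
        K = rbf(X, X) + noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))

    def predict(self, x):
        """Return posterior mean and variance of the next state at input x (d,)."""
        k = rbf(self.X, x[None, :])                       # (n, 1)
        mean = float(k.T @ self.alpha)
        v = np.linalg.solve(self.L, k)
        var = float(rbf(x[None, :], x[None, :]) - v.T @ v)
        return mean, max(var, 0.0)

def shielded_action(gp, state, rl_action, backup_action, limit, beta=2.0):
    """Recovery-based shield: keep the RL action unless the GP's pessimistic
    prediction (mean + beta * std) of the next state violates the safety
    limit, in which case fall back to the backup policy."""
    x = np.array([state, rl_action])
    mean, var = gp.predict(x)
    if mean + beta * np.sqrt(var) > limit:
        return backup_action, True    # recover to the backup policy
    return rl_action, False           # RL action is certified safe enough

# Example: learn the toy dynamics s' = s + a from samples, then shield a
# risky action that would push the state past the limit.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 2))
gp = GPDynamics(X, X[:, 0] + X[:, 1])
action, recovered = shielded_action(gp, state=0.9, rl_action=0.5,
                                    backup_action=-0.5, limit=1.0)
```

Because the GP variance grows away from observed data, the `beta * std` term makes the shield conservative exactly where the model is least trusted, which is the mechanism behind the "recover only when necessary" behavior the summary highlights.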
