[2602.19983] Contextual Safety Reasoning and Grounding for Open-World Robots


Summary

The paper presents CORE, a novel safety framework for open-world robots that enables contextual reasoning and enforcement of safety rules based on real-time visual observations.

Why It Matters

As robots increasingly operate in dynamic environments, ensuring their safety through adaptable strategies is crucial. CORE addresses the limitations of traditional safety methods by allowing robots to reason about their surroundings in real-time, thereby enhancing their operational safety and effectiveness in unpredictable contexts.

Key Takeaways

  • CORE framework allows online contextual reasoning for robot safety.
  • Utilizes vision-language models to derive safety rules from visual data.
  • Offers probabilistic safety guarantees despite perceptual uncertainties.
  • Demonstrates superior performance compared to traditional semantic safety methods.
  • Validates the importance of spatial grounding in enforcing contextual safety.

Computer Science > Robotics · arXiv:2602.19983 (cs) · Submitted on 23 Feb 2026

Title: Contextual Safety Reasoning and Grounding for Open-World Robots
Authors: Zachary Ravichadran, David Snyder, Alexander Robey, Hamed Hassani, Vijay Kumar, George J. Pappas

Abstract: Robots are increasingly operating in open-world environments where safe behavior depends on context: the same hallway may require different navigation strategies when crowded versus empty, or during an emergency versus normal operations. Traditional safety approaches enforce fixed constraints in user-specified contexts, limiting their ability to handle the open-ended contextual variability of real-world deployment. We address this gap via CORE, a safety framework that enables online contextual reasoning, grounding, and enforcement without prior knowledge of the environment (e.g., maps or safety specifications). CORE uses a vision-language model (VLM) to continuously reason about context-dependent safety rules directly from visual observations, grounds these rules in the physical environment, and enforces the resulting spatially defined safe sets via control barrier functions. We provide probabilistic safety guarantees for CORE that account for perceptual uncertainty, and we demonstrate through simulation and real-world experiments that CORE enforces co...
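The abstract says CORE enforces spatially defined safe sets via control barrier functions (CBFs). The paper's exact formulation is not given here, so the following is only a generic sketch of the underlying mechanism: a CBF safety filter that minimally modifies a nominal control so a single-integrator robot stays outside a grounded unsafe region. The function name, dynamics, and obstacle model are illustrative assumptions, not the paper's method.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, obstacle, radius, alpha=1.0):
    """Minimal CBF filter for single-integrator dynamics (x_dot = u),
    keeping the robot at least `radius` away from a point `obstacle`.

    Barrier:    h(x) = ||x - obstacle||^2 - radius^2  (h >= 0 is safe)
    Constraint: h_dot + alpha * h >= 0, i.e.  a @ u >= b
    The QP  min ||u - u_nom||^2  s.t.  a @ u >= b  reduces to a
    closed-form projection onto the constraint half-space, active
    only when the nominal control would violate the constraint.
    """
    diff = x - obstacle
    h = diff @ diff - radius**2        # barrier value (safe if >= 0)
    a = 2.0 * diff                     # gradient of h along x_dot = u
    b = -alpha * h
    if a @ u_nom >= b:                 # nominal control already safe
        return u_nom
    # Project u_nom onto {u : a @ u >= b}
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Example: robot at the origin, obstacle directly ahead; the nominal
# control drives straight at it and gets attenuated by the filter.
x = np.array([0.0, 0.0])
obstacle = np.array([1.0, 0.0])
u_safe = cbf_safety_filter(x, np.array([1.0, 0.0]), obstacle, radius=0.5)
# u_safe -> [0.375, 0.0]: forward motion is slowed, not reversed
```

In a CORE-style pipeline, the obstacle geometry would come from the VLM's grounded safety rules rather than being hard-coded, and the paper's probabilistic guarantees would additionally account for perceptual uncertainty in that grounding step.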

Related Articles

[2601.07855] RoAD Benchmark: How LiDAR Models Fail under Coupled Domain Shifts and Label Evolution
Machine Learning

[2502.00262] INSIGHT: Enhancing Autonomous Driving Safety through Vision-Language Models on Context-Aware Hazard Detection and Edge Case Evaluation
LLMs

[2508.00500] ProbGuard: Probabilistic Runtime Monitoring for LLM Agent Safety
LLMs

[2603.26660] Ruka-v2: Tendon Driven Open-Source Dexterous Hand with Wrist and Abduction for Robot Learning
Robotics


