[2602.19983] Contextual Safety Reasoning and Grounding for Open-World Robots
Summary
The paper presents CORE, a novel safety framework for open-world robots that enables contextual reasoning and enforcement of safety rules based on real-time visual observations.
Why It Matters
As robots increasingly operate in dynamic environments, ensuring their safety through adaptable strategies is crucial. CORE addresses the limitations of traditional safety methods by allowing robots to reason about their surroundings in real-time, thereby enhancing their operational safety and effectiveness in unpredictable contexts.
Key Takeaways
- The CORE framework enables online contextual reasoning for robot safety.
- Utilizes vision-language models to derive safety rules from visual data.
- Offers probabilistic safety guarantees despite perceptual uncertainties.
- Demonstrates superior performance compared to traditional semantic safety methods.
- Validates the importance of spatial grounding in enforcing contextual safety.
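The takeaway about probabilistic guarantees under perceptual uncertainty can be illustrated with a common technique: inflating a safe-set margin by a quantile of the perception error so the true unsafe region is covered with high probability. This is a hedged sketch, not the paper's method; the Gaussian error model and function name are illustrative assumptions.

```python
from statistics import NormalDist

def inflated_radius(r: float, sigma: float, delta: float = 0.05) -> float:
    """Inflate an estimated obstacle radius r so that, assuming
    (illustratively) Gaussian perception error with standard deviation
    sigma on the boundary distance, the true obstacle lies inside the
    inflated disc with probability at least 1 - delta."""
    z = NormalDist().inv_cdf(1.0 - delta)  # one-sided confidence quantile
    return r + z * sigma
```

With perfect perception (`sigma = 0`) the radius is unchanged; tightening `delta` enlarges the margin, trading conservatism for a stronger coverage probability.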
Computer Science > Robotics
arXiv:2602.19983 (cs) [Submitted on 23 Feb 2026]
Title: Contextual Safety Reasoning and Grounding for Open-World Robots
Authors: Zachary Ravichadran, David Snyder, Alexander Robey, Hamed Hassani, Vijay Kumar, George J. Pappas
Abstract: Robots are increasingly operating in open-world environments where safe behavior depends on context: the same hallway may require different navigation strategies when crowded versus empty, or during an emergency versus normal operations. Traditional safety approaches enforce fixed constraints in user-specified contexts, limiting their ability to handle the open-ended contextual variability of real-world deployment. We address this gap via CORE, a safety framework that enables online contextual reasoning, grounding, and enforcement without prior knowledge of the environment (e.g., maps or safety specifications). CORE uses a vision-language model (VLM) to continuously reason about context-dependent safety rules directly from visual observations, grounds these rules in the physical environment, and enforces the resulting spatially-defined safe sets via control barrier functions. We provide probabilistic safety guarantees for CORE that account for perceptual uncertainty, and we demonstrate through simulation and real-world experiments that CORE enforces co...
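The abstract's final stage, enforcing spatially-defined safe sets via control barrier functions (CBFs), can be sketched with a standard CBF safety filter. This is not the paper's implementation: the single-integrator dynamics, disc-shaped unsafe region, and function names below are illustrative assumptions. For a single linear CBF constraint, the usual quadratic program has a closed-form projection solution.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, c, r, alpha=1.0):
    """Minimally modify a nominal control u_nom so a single-integrator
    robot (x_dot = u) stays outside a disc of radius r centered at c.
    Barrier: h(x) = ||x - c||^2 - r^2 >= 0 on the safe set.
    CBF condition: dh/dt = 2 (x - c) . u >= -alpha * h(x).
    With one linear constraint, the QP  min ||u - u_nom||^2  s.t.
    a . u >= b  reduces to a closed-form projection."""
    d = x - c
    h = d @ d - r**2          # barrier value (positive when safe)
    a = 2.0 * d               # gradient of h w.r.t. x
    b = -alpha * h            # constraint right-hand side
    if a @ u_nom >= b:        # nominal control already satisfies the CBF
        return u_nom
    # project u_nom onto the half-space {u : a . u >= b}
    return u_nom + (b - a @ u_nom) / (a @ a) * a
```

A command driving the robot straight at the unsafe region is attenuated just enough to satisfy the barrier condition, while any command already pointing away passes through unchanged; this "filter, don't replan" behavior is what makes CBF enforcement composable with an upstream policy.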