[2602.12405] Self-Refining Vision Language Model for Robotic Failure Detection and Reasoning


arXiv - Machine Learning · 4 min read

Summary

The paper presents ARMOR, a self-refining vision language model for robotic failure detection and reasoning that improves failure detection rates by up to 30% over existing methods.

Why It Matters

As robots are increasingly deployed in critical applications, reliable failure detection and reasoning become essential. ARMOR addresses the challenges of limited annotations and subtle failure modes, improving the robustness of robotic systems in real-world scenarios.

Key Takeaways

  • ARMOR improves failure detection rates by up to 30%.
  • The model utilizes heterogeneous supervision for enhanced learning.
  • It employs a multi-task self-refinement process for better reasoning.
  • ARMOR remains robust to failures that fall outside predefined, closed-set failure modes.
  • The approach combines offline and online imitation learning effectively.
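The heterogeneous-supervision idea in the takeaways can be sketched as a per-sample loss with a binary detection term that is always available, plus a reasoning term added only for the small annotated subset. This is a hypothetical illustration, not the paper's actual objective; the `alpha` weight and the reasoning negative log-likelihood formulation are assumptions.

```python
import math

def heterogeneous_loss(det_pred, det_label, reason_logprob=None, alpha=1.0):
    """Combine a dense binary detection loss with an optional reasoning
    loss that exists only for the small annotated subset of samples."""
    eps = 1e-9
    # Binary cross-entropy on the success/failure label (always available).
    det_loss = -(det_label * math.log(det_pred + eps)
                 + (1 - det_label) * math.log(1 - det_pred + eps))
    # Negative log-likelihood of the reasoning text, when annotated.
    reason_loss = -reason_logprob if reason_logprob is not None else 0.0
    return det_loss + alpha * reason_loss
```

In this sketch, samples with only a sparse binary label still contribute a detection gradient, while the few richly annotated samples additionally supervise the reasoning output.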

Computer Science > Robotics · arXiv:2602.12405 (cs) · Submitted on 12 Feb 2026

Title: Self-Refining Vision Language Model for Robotic Failure Detection and Reasoning

Authors: Carl Qi, Xiaojie Wang, Silong Yong, Stephen Sheng, Huitan Mao, Sriram Srinivasan, Manikantan Nambi, Amy Zhang, Yesh Dattatreya

Abstract: Reasoning about failures is crucial for building reliable and trustworthy robotic systems. Prior approaches either treat failure reasoning as a closed-set classification problem or assume access to ample human annotations. Failures in the real world are typically subtle, combinatorial, and difficult to enumerate, whereas rich reasoning labels are expensive to acquire. We address this problem by introducing ARMOR: Adaptive Round-based Multi-task mOdel for Robotic failure detection and reasoning. We formulate detection and reasoning as a multi-task self-refinement process, where the model iteratively predicts detection outcomes and natural language reasoning conditioned on past outputs. During training, ARMOR learns from heterogeneous supervision: large-scale sparse binary labels and small-scale rich reasoning annotations, optimized via a combination of offline and online imitation learning. At inference time, ARMOR generates multiple refinement trajectories and selects the most confident prediction via...
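The round-based refinement and confidence-based selection described in the abstract might look like the following minimal sketch. `refine_and_select`, `stub_model`, and the `(detected, reasoning, confidence)` tuple format are illustrative assumptions, not the paper's actual API.

```python
import random

def refine_and_select(model, observation, rounds=3, trajectories=5, seed=0):
    """Sample several refinement trajectories and keep the most
    confident final (detected, reasoning, confidence) prediction."""
    rng = random.Random(seed)
    best = None
    for _ in range(trajectories):
        prediction = None
        for _ in range(rounds):
            # Each round conditions on the previous round's output.
            prediction = model(observation, previous=prediction, rng=rng)
        if best is None or prediction[2] > best[2]:
            best = prediction
    return best

def stub_model(observation, previous, rng):
    """Stand-in for the VLM: keeps its reasoning but resamples confidence."""
    reasoning = "gripper slipped before grasp" if previous is None else previous[1]
    return (True, reasoning, rng.random())
```

A real implementation would replace `stub_model` with the fine-tuned vision language model and derive confidence from, e.g., token log-probabilities; the loop structure is the part the abstract actually describes.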

