[2602.22056] FlowCorrect: Efficient Interactive Correction of Generative Flow Policies for Robotic Manipulation


arXiv - Machine Learning · 3 min read

Summary

The paper presents FlowCorrect, a framework for correcting generative flow policies in robotic manipulation at deployment time. Using minimal human input, it substantially improves success rates on tasks where the base policy fails.

Why It Matters

FlowCorrect addresses deployment-time failures in robotic systems by enabling real-time human corrections, improving the reliability and efficiency of robotic manipulation. This matters for human-robot interaction and for making learned policies practical in real-world deployments.

Key Takeaways

  • FlowCorrect allows for interactive corrections of robotic policies with minimal human input.
  • The framework improves success rates on hard cases by 85%.
  • It preserves performance on previously learned scenarios while adapting to new corrections.
  • The approach is sample-efficient, requiring few demonstrations for effective learning.
  • FlowCorrect enhances human-robot collaboration in real-world applications.

Computer Science > Robotics · arXiv:2602.22056 (cs) · Submitted on 25 Feb 2026

Title: FlowCorrect: Efficient Interactive Correction of Generative Flow Policies for Robotic Manipulation
Authors: Edgar Welte, Yitian Shi, Rosa Wolf, Maximillian Gilles, Rania Rayyes

Abstract: Generative manipulation policies can fail catastrophically under deployment-time distribution shift, yet many failures are near-misses: the robot reaches almost-correct poses and would succeed with a small corrective motion. We present FlowCorrect, a deployment-time correction framework that converts near-miss failures into successes using sparse human nudges, without full policy retraining. During execution, a human provides brief corrective pose nudges via a lightweight VR interface. FlowCorrect uses these sparse corrections to locally adapt the policy, improving actions without retraining the backbone while preserving the model's performance on previously learned scenarios. We evaluate on a real-world robot across three tabletop tasks: pick-and-place, pouring, and cup uprighting. With a low correction budget, FlowCorrect improves success on hard cases by 85% while preserving performance on previously solved scenarios. The results clearly demonstrate that FlowCorrect learns from only very few demonstrations and enable...
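The abstract does not spell out the adaptation mechanism, but the core idea it describes — nudges collected at deployment locally adjusting actions while the policy backbone stays frozen — can be sketched as a kernel-weighted residual on top of the base policy's output. Everything below (the `CorrectionStore` buffer, the Gaussian kernel, the `bandwidth` parameter) is an illustrative assumption, not the authors' actual method:

```python
import numpy as np

class CorrectionStore:
    """Sparse buffer of human pose nudges collected at deployment time."""
    def __init__(self):
        self.states = []      # states at which a human gave a nudge
        self.residuals = []   # corrective pose offsets supplied by the human

    def add(self, state, residual):
        self.states.append(np.asarray(state, dtype=float))
        self.residuals.append(np.asarray(residual, dtype=float))

def corrected_action(base_policy, state, store, bandwidth=0.5):
    """Blend a locally weighted residual into the frozen base policy's action.

    Far from every stored correction, the kernel weights vanish and the base
    policy's action is returned unchanged -- preserving behavior on
    previously solved scenarios.
    """
    state = np.asarray(state, dtype=float)
    action = base_policy(state)
    if not store.states:
        return action
    states = np.stack(store.states)
    residuals = np.stack(store.residuals)
    d2 = np.sum((states - state) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel weights
    if w.sum() < 1e-8:                          # no nearby correction: no change
        return action
    return action + (w[:, None] * residuals).sum(axis=0) / w.sum()
```

For example, a nudge of +0.1 along the first pose axis recorded at the origin shifts actions near the origin by that offset, while states far away are untouched. The real framework presumably adapts the flow policy itself rather than post-hoc blending, but this captures the locality-plus-preservation trade-off the abstract emphasizes.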
