[2602.22056] FlowCorrect: Efficient Interactive Correction of Generative Flow Policies for Robotic Manipulation
Summary
The paper presents FlowCorrect, a framework for correcting generative flow policies in robotic manipulation using minimal human input, improving success rates on hard cases by 85% without retraining the policy backbone.
Why It Matters
FlowCorrect addresses the challenge of deployment-time failures in robotic systems by enabling real-time human corrections, enhancing the reliability and efficiency of robotic manipulation tasks. This innovation is crucial for advancing human-robot interaction and practical applications in robotics.
Key Takeaways
- FlowCorrect allows for interactive corrections of robotic policies with minimal human input.
- Under a low correction budget, the framework improves success on hard cases by 85%.
- It preserves the performance of previously learned scenarios while adapting to new corrections.
- The approach is sample-efficient, requiring few demonstrations for effective learning.
- FlowCorrect enhances human-robot collaboration in real-world applications.
Computer Science > Robotics
arXiv:2602.22056 (cs)
[Submitted on 25 Feb 2026]
Authors: Edgar Welte, Yitian Shi, Rosa Wolf, Maximillian Gilles, Rania Rayyes
Abstract: Generative manipulation policies can fail catastrophically under deployment-time distribution shift, yet many failures are near-misses: the robot reaches almost-correct poses and would succeed with a small corrective motion. We present FlowCorrect, a deployment-time correction framework that converts near-miss failures into successes using sparse human nudges, without full policy retraining. During execution, a human provides brief corrective pose nudges via a lightweight VR interface. FlowCorrect uses these sparse corrections to locally adapt the policy, improving actions without retraining the backbone while preserving the model's performance on previously learned scenarios. We evaluate on a real-world robot across three tabletop tasks: pick-and-place, pouring, and cup uprighting. With a low correction budget, FlowCorrect improves success on hard cases by 85% while preserving performance on previously solved scenarios. The results demonstrate clearly that FlowCorrect learns only with very few demonstrations and enable...
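The abstract describes locally adapting a frozen policy from sparse corrective nudges. The paper's actual adaptation mechanism is not detailed in the abstract, but the general idea can be sketched as a residual correction layer: record each human nudge as a (pose, delta) pair and, at execution time, add a distance-weighted blend of nearby deltas to the policy's proposed action, leaving the backbone untouched. All class and method names below, and the Gaussian weighting, are illustrative assumptions rather than the paper's method.

```python
import numpy as np

class LocalCorrector:
    """Hypothetical deployment-time correction sketch (not the paper's method).

    Stores sparse human nudges and applies a distance-weighted residual to
    the frozen policy's action, so the backbone is never retrained and
    behavior far from any recorded nudge is unchanged.
    """

    def __init__(self, bandwidth=0.05):
        self.bandwidth = bandwidth  # length scale (m) of a nudge's local influence
        self.poses = []             # end-effector poses where the human intervened
        self.deltas = []            # corrective nudges recorded at those poses

    def record_nudge(self, pose, delta):
        """Store one sparse correction: at `pose`, the human nudged by `delta`."""
        self.poses.append(np.asarray(pose, dtype=float))
        self.deltas.append(np.asarray(delta, dtype=float))

    def correct(self, pose, action):
        """Return the policy action plus a locally weighted corrective residual."""
        action = np.asarray(action, dtype=float)
        if not self.poses:
            return action  # no corrections recorded: policy output passes through
        pose = np.asarray(pose, dtype=float)
        dists = np.linalg.norm(np.stack(self.poses) - pose, axis=1)
        weights = np.exp(-0.5 * (dists / self.bandwidth) ** 2)  # Gaussian falloff
        residual = weights @ np.stack(self.deltas)
        return action + residual
```

Because each nudge's influence decays with distance from where it was recorded, corrections stay local, which is one simple way to preserve performance on previously solved scenarios while fixing near-miss failures.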