[2602.13810] Mean Flow Policy with Instantaneous Velocity Constraint for One-step Action Generation
Summary
The paper introduces the Mean Velocity Policy (MVP) for reinforcement learning, which enhances one-step action generation by modeling the mean velocity field with an instantaneous velocity constraint, achieving superior performance in robotic tasks.
Why It Matters
This research addresses the trade-off between expressiveness and computational efficiency in reinforcement learning policies. By collapsing action generation to a single flow step without sacrificing expressiveness, the approach improves both training and inference speed, which matters for robotic manipulation tasks deployed in real-world settings.
Key Takeaways
- Mean Velocity Policy (MVP) improves one-step action generation in RL.
- Introduces an instantaneous velocity constraint to enhance expressiveness.
- Demonstrates state-of-the-art performance in robotic manipulation tasks.
- Offers significant improvements in training and inference speed.
- Addresses the balance between expressiveness and computational burden.
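The one-step generation idea above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the linear model `mean_velocity` stands in for the learned mean velocity network, and the update follows the mean-flow displacement rule, where moving from time t to time r costs one network evaluation scaled by (t - r). With r = 0 and t = 1, a single call maps noise to an action.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the learned mean velocity field u(z, r, t); the real MVP
# uses a neural network conditioned on the observation as well.
W = rng.normal(size=(4, 4)) * 0.1

def mean_velocity(z, r, t):
    # Mean velocity over the interval [r, t]: the average of the
    # instantaneous velocity, here a toy linear parameterization.
    return z @ W.T + (t - r)

def one_step_action(z1):
    # Mean-flow displacement: z_r = z_t - (t - r) * u(z_t, r, t).
    # With r = 0, t = 1 the action is produced in a single evaluation.
    r, t = 0.0, 1.0
    return z1 - (t - r) * mean_velocity(z1, r, t)

z1 = rng.normal(size=(3, 4))      # one noise sample per action
actions = one_step_action(z1)
print(actions.shape)  # (3, 4)
```

A multi-step flow policy would instead integrate the instantaneous velocity over many small intervals; modeling the mean velocity directly is what lets MVP amortize that integral into one call.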
Computer Science > Machine Learning
arXiv:2602.13810 (cs)
[Submitted on 14 Feb 2026]
Title: Mean Flow Policy with Instantaneous Velocity Constraint for One-step Action Generation
Authors: Guojian Zhan, Letian Tao, Pengcheng Wang, Yixiao Wang, Yiheng Li, Yuxin Chen, Masayoshi Tomizuka, Shengbo Eben Li
Abstract: Learning expressive and efficient policy functions is a promising direction in reinforcement learning (RL). While flow-based policies have recently proven effective in modeling complex action distributions with a fast deterministic sampling process, they still face a trade-off between expressiveness and computational burden, which is typically controlled by the number of flow steps. In this work, we propose mean velocity policy (MVP), a new generative policy function that models the mean velocity field to achieve the fastest one-step action generation. To ensure its high expressiveness, an instantaneous velocity constraint (IVC) is introduced on the mean velocity field during training. We theoretically prove that this design explicitly serves as a crucial boundary condition, thereby improving learning accuracy and enhancing policy expressiveness. Empirically, our MVP achieves state-of-the-art success rates across several challenging robotic manipulation tasks from Robomimic and OGBench. It...
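The boundary condition the abstract refers to can be made concrete with a small sketch. On the diagonal r = t, the mean velocity over an interval of zero length must equal the instantaneous velocity, so the IVC can be read as a penalty on that residual. Everything below is illustrative: the linear model, the linear interpolation path, and the target velocity v = z1 - z0 are standard flow-matching assumptions, not necessarily the paper's exact training objective.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear stand-in for the learned mean velocity field u(z, r, t).
W = rng.normal(size=(4, 4)) * 0.1

def u(z, r, t):
    return z @ W.T + (t - r)

def ivc_penalty(z0, z1, t):
    """Instantaneous velocity constraint as a boundary-condition penalty.

    Under a linear interpolation path z_t = (1 - t) * z0 + t * z1, the
    instantaneous velocity is v = z1 - z0, so the constraint enforces
    u(z_t, t, t) = v at every t (assumed setup, for illustration).
    """
    zt = (1.0 - t) * z0 + t * z1
    v_target = z1 - z0
    residual = u(zt, t, t) - v_target   # mean velocity on the diagonal r = t
    return float(np.mean(residual ** 2))

z0 = rng.normal(size=(8, 4))   # data-side samples (actions)
z1 = rng.normal(size=(8, 4))   # noise-side samples
loss = ivc_penalty(z0, z1, t=0.5)
print(loss >= 0.0)  # True
```

Added to the main mean-velocity objective, a term like this pins down the field's behavior at zero-length intervals, which is what the paper argues makes one-step generation accurate.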