[2603.01469] Mean-Flow based One-Step Vision-Language-Action
Computer Science > Robotics

arXiv:2603.01469 (cs) [Submitted on 2 Mar 2026]

Title: Mean-Flow based One-Step Vision-Language-Action

Authors: Yang Chen, Xiaoguang Ma, Bin Zhao

Abstract: Recent advances in Flow-Matching-based Vision-Language-Action (VLA) frameworks have demonstrated remarkable advantages in generating high-frequency action chunks, particularly for highly dexterous robotic manipulation tasks. Despite these achievements, their practical application is constrained by prolonged generation latency, which stems from the inherent iterative-sampling requirement and from architectural limitations. To address this bottleneck, we propose a Mean-Flow based One-Step VLA approach. Specifically, we resolve the noise-induced issues in the action-generation process, thereby eliminating the consistency constraints inherent to conventional Flow-Matching methods. This significantly improves generation efficiency and enables one-step action generation. Real-world robotic experiments show that the proposed Mean-Flow based One-Step VLA generates actions 8.7 times faster than SmolVLA and 83.9 times faster than Diffusion Policy. These results demonstrate its potential as a high-efficiency backbone for VLA-based robotic manipulation.

Subjects: Robotics (cs.RO); Artificial Intelligence (cs.AI)

Cite as: arXiv:2603.01469 [cs.RO]
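The abstract does not give implementation details, but the core idea behind one-step generation with a mean (average) velocity field can be illustrated with a short sketch. The Python/PyTorch snippet below is a minimal, hypothetical illustration and not the authors' implementation: the names u_theta, obs_embedding, horizon, and action_dim are assumptions, and it follows the standard MeanFlow formulation with linear noising path z_t = (1 - t) * x + t * eps, under which a clean sample is recovered in a single network evaluation as x = z_1 - u(z_1, 0, 1).

    import torch

    def one_step_action(u_theta, obs_embedding, action_dim, horizon, device="cpu"):
        """Generate an action chunk in a single network evaluation (sketch).

        u_theta is assumed to predict the *average* velocity u(z_t, r, t)
        over the interval [r, t], so one evaluation over the full interval
        [0, 1] replaces the iterative ODE integration used by standard
        Flow-Matching samplers.
        """
        z1 = torch.randn(1, horizon, action_dim, device=device)  # pure noise at t = 1
        r = torch.zeros(1, device=device)  # interval start
        t = torch.ones(1, device=device)   # interval end
        u = u_theta(z1, r, t, obs_embedding)  # average velocity over [0, 1]
        return z1 - u  # one-step denoising: x = z_1 - (1 - 0) * u(z_1, 0, 1)

In contrast, a conventional Flow-Matching sampler would evaluate an instantaneous-velocity network at many intermediate timesteps and integrate the resulting ODE, which is what produces the generation latency the paper targets.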