[2603.20607] Towards Practical World Model-based Reinforcement Learning for Vision-Language-Action Models
Computer Science > Robotics
arXiv:2603.20607 (cs)
[Submitted on 21 Mar 2026]

Title: Towards Practical World Model-based Reinforcement Learning for Vision-Language-Action Models
Authors: Zhilong Zhang, Haoxiang Ren, Yihao Sun, Yifei Sheng, Haonan Wang, Haoxin Lin, Zhichao Wu, Pierre-Luc Bacon, Yang Yu

Abstract: Vision-Language-Action (VLA) models show strong generalization for robotic control, but finetuning them with reinforcement learning (RL) is constrained by the high cost and safety risks of real-world interaction. Training VLA models inside interactive world models avoids these issues but introduces several challenges, including pixel-level world modeling, multi-view consistency, and compounding errors under sparse rewards. Building on recent advances in large multimodal models and model-based RL, we propose VLA-MBPO, a practical framework that tackles these problems in VLA finetuning. Our approach rests on three key design choices: (i) adapting unified multimodal models (UMMs) for data-efficient world modeling; (ii) an interleaved view decoding mechanism that enforces multi-view consistency; and (iii) chunk-level branched rollout to mitigate error compounding. Theoretical analysis and experiments across simulation and real-world tasks demonstrate that VLA-MBPO significantly improves policy per...
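The abstract's third design choice, chunk-level branched rollout, follows the MBPO pattern of branching short model rollouts from states sampled out of real data, except that the policy commits to a chunk of actions at a time, so world-model error compounds per chunk rather than per primitive step. The following is a minimal illustrative sketch of that idea, not the paper's implementation; all function names, chunk sizes, and the toy world-model dynamics are assumptions for illustration.

```python
import numpy as np

CHUNK = 4        # actions per chunk (hypothetical horizon H)
BRANCH_LEN = 2   # chunks rolled out per branch before truncation

def toy_world_model(state, action_chunk):
    # Stand-in for the learned world model: predicts the state after
    # executing an entire action chunk, plus a scalar reward.
    next_state = state + float(action_chunk.sum())
    reward = -abs(next_state)
    return next_state, reward

def toy_policy(state, rng):
    # Stand-in for a VLA policy head that emits an H-step action chunk.
    return rng.normal(size=CHUNK)

def branched_rollout(real_states, rng):
    """Branch short model rollouts from real states, one chunk at a time."""
    synthetic = []
    for start in real_states:
        state = start
        for _ in range(BRANCH_LEN):
            chunk = toy_policy(state, rng)
            next_state, reward = toy_world_model(state, chunk)
            # Chunk-level transitions go into the synthetic replay buffer.
            synthetic.append((state, chunk, reward, next_state))
            state = next_state
    return synthetic

rng = np.random.default_rng(0)
data = branched_rollout([0.0, 1.0], rng)
print(len(data))  # 2 real states x 2 chunks each = 4 synthetic transitions
```

Keeping `BRANCH_LEN` small is what limits compounding error: each branch stays close to a real state, while chunking amortizes model queries over H actions.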