[2604.09059] Learning Vision-Language-Action World Models for Autonomous Driving
Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.09059 (cs)
[Submitted on 10 Apr 2026]

Title: Learning Vision-Language-Action World Models for Autonomous Driving
Authors: Guoqing Wang, Pin Tang, Xiangxuan Ren, Guodongfang Zhao, Bailan Feng, Chao Ma

Abstract: Vision-Language-Action (VLA) models have recently achieved notable progress in end-to-end autonomous driving by integrating perception, reasoning, and control within a unified multimodal framework. However, they often lack explicit modeling of temporal dynamics and global world consistency, which limits their foresight and safety. In contrast, world models can simulate plausible future scenes but generally struggle to reason about or evaluate the futures they imagine. In this work, we present VLA-World, a simple yet effective VLA world model that unifies predictive imagination with reflective reasoning to improve driving foresight. VLA-World first uses an action-derived feasible trajectory to guide the generation of the next-frame image, capturing rich spatial and temporal cues that describe how the surrounding environment evolves. The model then reasons over this self-generated future frame to refine the predicted trajectory, achieving higher performance and better interpretability. To support this pipeline, we curate nu...
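The abstract describes a two-stage predict-then-reflect pipeline: predict a coarse, action-derived trajectory, imagine the next frame conditioned on it, then reason over that imagined frame to refine the trajectory. The following is a minimal PyTorch-style sketch of that loop; the module names, interfaces, and dimensions (VLAWorld, vla_backbone, frame_generator, feat_dim, num_waypoints) are assumptions for illustration only and are not taken from the paper.

import torch
import torch.nn as nn

class VLAWorld(nn.Module):
    """Hypothetical sketch of the predict-then-reflect loop from the abstract."""

    def __init__(self, vla_backbone: nn.Module, frame_generator: nn.Module,
                 feat_dim: int = 512, num_waypoints: int = 6):
        super().__init__()
        self.vla = vla_backbone            # multimodal encoder: images + language -> features
        self.generator = frame_generator   # trajectory-conditioned next-frame image model
        self.coarse_head = nn.Linear(feat_dim, num_waypoints * 2)       # initial plan (x, y per waypoint)
        self.refine_head = nn.Linear(2 * feat_dim, num_waypoints * 2)   # reflective refinement

    def forward(self, images: torch.Tensor, text_tokens: torch.Tensor):
        # 1) Encode the current scene and language context.
        scene_feat = self.vla(images, text_tokens)

        # 2) Predict an action-derived feasible trajectory from the current scene.
        coarse_traj = self.coarse_head(scene_feat)

        # 3) Imagine the next frame, conditioned on that trajectory, to capture
        #    how the surrounding environment is expected to evolve.
        imagined_frame = self.generator(images, coarse_traj)

        # 4) Reflect: reason over the self-generated future frame and refine
        #    the trajectory using the additional spatio-temporal cues.
        future_feat = self.vla(imagined_frame, text_tokens)
        refined_traj = self.refine_head(torch.cat([scene_feat, future_feat], dim=-1))

        return refined_traj, imagined_frame

The key design choice this sketch mirrors is that the same vision-language backbone is reused on the imagined frame, so the refinement step reasons over the generated future rather than only the observed present.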