[2602.13329] HiST-VLA: A Hierarchical Spatio-Temporal Vision-Language-Action Model for End-to-End Autonomous Driving
Summary
The HiST-VLA model enhances autonomous driving by integrating vision, language, and action through improved spatio-temporal reasoning and computational efficiency.
Why It Matters
This research addresses critical limitations in existing Vision-Language-Action models for autonomous driving, such as imprecise numerical reasoning, weak 3D spatial awareness, and high sensitivity to context. By proposing a hierarchical model, it aims to improve trajectory generation, which is essential for the safety and reliability of autonomous vehicles in real-world scenarios.
Key Takeaways
- HiST-VLA improves 3D spatial and temporal reasoning for autonomous driving.
- Dynamic token sparsification enhances computational efficiency without sacrificing performance.
- The hierarchical transformer-based planner refines trajectory generation using language commands.
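The paper states that its sparsification step fuses redundant tokens rather than filtering them out. The exact mechanism isn't given in this summary, but the general idea can be illustrated with a minimal sketch: repeatedly merge the most cosine-similar pair of tokens by averaging, so the sequence shrinks without discarding information. The function name and the averaging-based merge are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def fuse_redundant_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Reduce an (N, D) token sequence to (N - r, D) by fusing, not dropping.

    Hypothetical sketch: at each step, find the most cosine-similar pair
    of tokens and replace both with their average, so information from
    redundant tokens is retained rather than filtered away.
    """
    tokens = tokens.astype(np.float64).copy()
    for _ in range(r):
        n = tokens.shape[0]
        # Cosine similarity between all token pairs.
        normed = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
        sim = normed @ normed.T
        np.fill_diagonal(sim, -np.inf)          # ignore self-similarity
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        fused = (tokens[i] + tokens[j]) / 2.0   # fuse instead of discarding
        keep = [k for k in range(n) if k not in (i, j)]
        tokens = np.vstack([tokens[keep], fused[None, :]])
    return tokens
```

With three tokens where two are identical, one fusion step merges the duplicate pair and leaves the distinct token untouched, shrinking the sequence from 3 to 2 tokens.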
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.13329 (cs)
[Submitted on 11 Feb 2026]
Title: HiST-VLA: A Hierarchical Spatio-Temporal Vision-Language-Action Model for End-to-End Autonomous Driving
Authors: Yiru Wang, Zichong Gu, Yu Gao, Anqing Jiang, Zhigang Sun, Shuo Wang, Yuwen Heng, Hao Sun
Abstract: Vision-Language-Action (VLA) models offer promising capabilities for autonomous driving through multimodal understanding. However, their utilization in safety-critical scenarios is constrained by inherent limitations, including imprecise numerical reasoning, weak 3D spatial awareness, and high sensitivity to context. To address these challenges, we propose HiST-VLA, a novel Hierarchical Spatio-Temporal VLA model designed for reliable trajectory generation. Our framework enhances 3D spatial and temporal reasoning by integrating geometric awareness with fine-grained driving commands and state history prompting. To ensure computational efficiency, we integrate dynamic token sparsification into the VLA architecture. This approach fuses redundant tokens rather than filtering them, effectively reducing redundancy without sacrificing model performance. Furthermore, we employ a hierarchical transformer-based planner to progressively refine coarse VLA waypoints into fin...
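The abstract describes a hierarchical planner that progressively refines coarse VLA waypoints into a finer trajectory. As a rough numeric illustration of the coarse-to-fine idea only, the sketch below upsamples a sparse waypoint sequence by linear interpolation; the function name, the interpolation, and the upsampling factor are stand-ins for illustration, not the paper's transformer-based refinement stages.

```python
import numpy as np

def refine_waypoints(coarse: np.ndarray, factor: int = 4) -> np.ndarray:
    """Upsample (N, 2) coarse waypoints to ((N-1)*factor + 1, 2) fine ones.

    Hypothetical stand-in for a hierarchical planner: here simple linear
    interpolation along the path plays the role of the learned
    refinement stages that turn coarse waypoints into a dense trajectory.
    """
    n = coarse.shape[0]
    t_coarse = np.arange(n, dtype=np.float64)
    t_fine = np.linspace(0.0, n - 1, (n - 1) * factor + 1)
    # Interpolate each coordinate (x, y) independently over the path index.
    return np.stack(
        [np.interp(t_fine, t_coarse, coarse[:, d]) for d in range(coarse.shape[1])],
        axis=1,
    )
```

For example, two coarse waypoints at (0, 0) and (4, 0) with factor 4 become five evenly spaced points along the segment; a learned refiner would additionally adjust the intermediate points rather than leave them on the straight line.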