[2505.15925] VERDI: VLM-Embedded Reasoning for Autonomous Driving
Computer Science > Robotics
arXiv:2505.15925 (cs)
[Submitted on 21 May 2025 (v1), last revised 3 Apr 2026 (this version, v4)]

Title: VERDI: VLM-Embedded Reasoning for Autonomous Driving
Authors: Bowen Feng, Zhiting Mei, Julian Ost, Filippo Ghilotti, Baiang Li, Roger Girgis, Anirudha Majumdar, Felix Heide

Abstract: While autonomous driving (AD) stacks struggle with decision making under partial observability and real-world complexity, human drivers are capable of applying commonsense reasoning to make near-optimal decisions with limited information. Recent work has attempted to leverage finetuned Vision-Language Models (VLMs) for trajectory planning at inference time to emulate human behavior. Despite their success in benchmark evaluations, these methods are often impractical to deploy (running a 70B-parameter VLM at merely 8 tokens per second requires more than 160 GB of memory), and their monolithic network structure prohibits safety decomposition. To bridge this gap, we propose VLM-Embedded Reasoning for autonomous DrIving (VERDI), a training-time framework that distills the reasoning process and commonsense knowledge of VLMs into the AD stack. VERDI augments modular differentiable end-to-end (e2e) AD models by aligning intermediate module outputs at the perception, prediction, and planning stages with text features explaining t...
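To make the training-time alignment idea concrete, here is a minimal sketch of one plausible form it could take: an intermediate feature from an AD module (e.g., perception) is projected into the text-embedding space of a frozen VLM and pulled toward the embedding of the VLM's reasoning text with a cosine-similarity loss. All names (FeatureProjector, the dimensions, the loss form) are illustrative assumptions, not the paper's actual API or architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureProjector(nn.Module):
    """Maps an AD-stack feature (e.g., from perception) into the VLM text space.

    Hypothetical helper; the paper's projection design may differ.
    """

    def __init__(self, feat_dim: int, text_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.proj(feat)


def alignment_loss(module_feat: torch.Tensor,
                   text_emb: torch.Tensor,
                   projector: FeatureProjector) -> torch.Tensor:
    """Cosine distance between projected module features and VLM text embeddings."""
    z = F.normalize(projector(module_feat), dim=-1)
    t = F.normalize(text_emb.detach(), dim=-1)  # VLM side is frozen: no gradients
    return (1.0 - (z * t).sum(dim=-1)).mean()


if __name__ == "__main__":
    batch, feat_dim, text_dim = 4, 256, 512
    projector = FeatureProjector(feat_dim, text_dim)
    perception_feat = torch.randn(batch, feat_dim)  # stand-in for a module's intermediate output
    vlm_text_emb = torch.randn(batch, text_dim)     # stand-in for VLM reasoning-text features
    loss = alignment_loss(perception_feat, vlm_text_emb, projector)
    loss.backward()  # gradients flow only into the AD stack and projector
    print(f"alignment loss: {loss.item():.4f}")

Under this reading, one such loss per stage (perception, prediction, planning) would be added to the usual task losses during training, while inference runs the modular e2e stack alone, without the VLM, which is what avoids the deployment cost quoted in the abstract.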