[2603.25740] Drive My Way: Preference Alignment of Vision-Language-Action Model for Personalized Driving
Computer Science > Robotics
arXiv:2603.25740 (cs)
[Submitted on 26 Mar 2026]

Title: Drive My Way: Preference Alignment of Vision-Language-Action Model for Personalized Driving
Authors: Zehao Wang, Huaide Jiang, Shuaiwu Dong, Yuping Wang, Hang Qiu, Jiachen Li

Abstract: Human driving behavior is inherently personal: it is shaped by long-term habits and influenced by short-term intentions. Individuals differ in how they accelerate, brake, merge, yield, and overtake across diverse situations. However, existing end-to-end autonomous driving systems either optimize for generic objectives or rely on fixed driving modes, lacking the ability to adapt to individual preferences or interpret natural language intent. To address this gap, we propose Drive My Way (DMW), a personalized Vision-Language-Action (VLA) driving framework that aligns with users' long-term driving habits and adapts to real-time user instructions. DMW learns a user embedding from our personalized driving dataset collected across multiple real drivers and conditions the policy on this embedding during planning, while natural language instructions provide additional short-term guidance. Closed-loop evaluation on the Bench2Drive benchmark demonstrates that DMW improves style instruction adaptation, and user studies show that its gene...
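To make the conditioning idea concrete, here is a minimal sketch (not the authors' code) of a planning policy conditioned on a learned per-driver embedding for long-term habits plus an optional instruction embedding for short-term guidance. All names (UserConditionedPolicy, user_table, the dimensions, and the fusion by concatenation) are hypothetical assumptions; the abstract does not specify DMW's actual architecture.

```python
# Hypothetical sketch of user-conditioned planning, assuming concatenation-based
# fusion and a simple MLP head; DMW's real design may differ substantially.
import torch
import torch.nn as nn

class UserConditionedPolicy(nn.Module):
    def __init__(self, num_users, scene_dim=256, embed_dim=64, horizon=8):
        super().__init__()
        # One learned embedding per driver, capturing long-term habits.
        self.user_table = nn.Embedding(num_users, embed_dim)
        # Scene features + user embedding + instruction embedding -> waypoints.
        self.head = nn.Sequential(
            nn.Linear(scene_dim + 2 * embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, horizon * 2),  # (x, y) waypoint per timestep
        )
        self.horizon = horizon

    def forward(self, scene_feat, user_id, instr_emb=None):
        u = self.user_table(user_id)           # long-term driver preference
        if instr_emb is None:                  # no short-term instruction given
            instr_emb = torch.zeros_like(u)
        x = torch.cat([scene_feat, u, instr_emb], dim=-1)
        return self.head(x).view(-1, self.horizon, 2)  # planned trajectory

# Usage: scene features from a perception backbone, a driver index, and an
# encoded language instruction (e.g., "merge more assertively").
policy = UserConditionedPolicy(num_users=10)
scene = torch.randn(1, 256)
traj = policy(scene, torch.tensor([3]), instr_emb=torch.randn(1, 64))
print(traj.shape)  # torch.Size([1, 8, 2])
```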