[2505.16377] VLM-SAFE: Vision-Language Model-Guided Safety-Aware Reinforcement Learning with World Models for Autonomous Driving
Computer Science > Robotics
arXiv:2505.16377 (cs)
[Submitted on 22 May 2025 (v1), last revised 28 Mar 2026 (this version, v2)]

Title: VLM-SAFE: Vision-Language Model-Guided Safety-Aware Reinforcement Learning with World Models for Autonomous Driving
Authors: Yansong Qu, Zilin Huang, Zihao Sheng, Jiancong Chen, Yue Leng, Samuel Labi, Sikai Chen

Abstract: Autonomous driving policy learning with reinforcement learning (RL) is fundamentally limited by low sample efficiency, weak generalization, and a dependence on unsafe online trial-and-error interactions. Although safe RL introduces explicit constraints or costs, existing methods often fail to capture the semantic meaning of safety in real driving scenes, leading to conservative behaviors in simple cases and insufficient risk awareness in complex ones. To address this issue, we propose VLM-SAFE, an offline safe RL framework that follows a human cognitive loop of observe-imagine-evaluate-act. Starting from offline driving data, VLM-SAFE observes traffic scenarios and leverages a vision-language model (VLM) to provide semantic safety signals grounded in scene understanding. A learned world model then imagines future trajectories from the observed context, enabling the agent to reason about possible consequences without interacting with the re...
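The abstract only names the observe-imagine-evaluate-act loop; the minimal Python sketch below shows one way such a loop could be wired together, with a VLM-derived safety score gating candidate plans imagined by a world model. Every name here (`WorldModel`, `vlm_safety_cost`, `select_action`), the toy linear dynamics, and the reward-minus-penalty scoring rule are illustrative assumptions for exposition, not the paper's implementation, which is not reproduced on this page.

```python
# Illustrative sketch only: class/function names, dynamics, and the scoring rule
# are assumptions; they are not taken from the VLM-SAFE paper.
import numpy as np


def vlm_safety_cost(observation: np.ndarray) -> float:
    """Placeholder for a VLM-derived semantic safety signal in [0, 1].

    In the described framework a vision-language model scores how risky the
    current traffic scene is; here it is stubbed with a fixed value.
    """
    return 0.1


class WorldModel:
    """Toy latent dynamics model standing in for the learned world model."""

    def __init__(self, state_dim: int, action_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(scale=0.1, size=(state_dim, state_dim))
        self.B = rng.normal(scale=0.1, size=(state_dim, action_dim))

    def imagine(self, state: np.ndarray, actions: np.ndarray) -> np.ndarray:
        """Roll out an imagined trajectory for a sequence of actions."""
        traj, s = [], state
        for a in actions:
            s = s + self.A @ s + self.B @ a  # toy linear dynamics step
            traj.append(s)
        return np.stack(traj)


def select_action(state, obs, model, horizon=5, candidates=32, lam=2.0, seed=0):
    """Observe -> imagine -> evaluate -> act, as named in the abstract.

    Samples candidate action sequences, imagines their outcomes with the world
    model, and penalizes imagined progress by the VLM safety cost.
    """
    rng = np.random.default_rng(seed)
    safety = vlm_safety_cost(obs)                 # observe: semantic risk signal
    best_score, best_action = -np.inf, None
    for _ in range(candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, model.B.shape[1]))
        traj = model.imagine(state, actions)      # imagine: future trajectories
        reward = float(traj[:, 0].sum())          # toy reward: progress along axis 0
        score = reward - lam * safety * horizon   # evaluate: reward minus safety penalty
        if score > best_score:
            best_score, best_action = score, actions[0]
    return best_action                            # act: first action of the best plan


if __name__ == "__main__":
    model = WorldModel(state_dim=4, action_dim=2)
    state = np.zeros(4)
    obs = np.zeros((64, 64, 3))                   # stand-in for a camera frame
    print("chosen action:", select_action(state, obs, model))
```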