[2601.12138] DriveSafe: A Hierarchical Risk Taxonomy for Safety-Critical LLM-Based Driving Assistants
Computer Science > Artificial Intelligence
arXiv:2601.12138 (cs)
[Submitted on 17 Jan 2026 (v1), last revised 23 Mar 2026 (this version, v3)]

Title: DriveSafe: A Hierarchical Risk Taxonomy for Safety-Critical LLM-Based Driving Assistants
Authors: Abhishek Kumar, Riya Tapwal, Carsten Maple

Abstract: Large Language Models (LLMs) are increasingly integrated into vehicle-based digital assistants, where unsafe, ambiguous, or legally incorrect responses can lead to serious safety, ethical, and regulatory consequences. Despite growing interest in LLM safety, existing taxonomies and evaluation frameworks remain largely general-purpose and fail to capture the domain-specific risks inherent to real-world driving scenarios. In this paper, we introduce DriveSafe, a hierarchical, four-level risk taxonomy designed to systematically characterize safety-critical failure modes of LLM-based driving assistants. The taxonomy comprises 129 fine-grained atomic risk categories spanning technical, legal, societal, and ethical dimensions, grounded in real-world driving regulations and safety principles and reviewed by domain experts. To validate the safety relevance and realism of the constructed prompts, we evaluate their refusal behavior across six widely deployed LLMs. Our analysis shows that the evaluated models...