[2602.13248] X-Blocks: Linguistic Building Blocks of Natural Language Explanations for Automated Vehicles
Summary
The paper introduces X-Blocks, a hierarchical framework for analysing natural language explanations for automated vehicles, identifying the structured linguistic elements that make explanations trustworthy and understandable to users.
Why It Matters
As automated vehicles (AVs) become more prevalent, communicating their decision-making processes effectively is crucial for user acceptance. This framework provides a systematic approach to creating transparent and trustworthy explanations, which can ease the integration of AVs into society.
Key Takeaways
- The X-Blocks framework identifies the linguistic building blocks of explanations at three levels: context, syntax, and lexicon.
- The RACE framework classifies explanations into 32 scenario-aware categories with high accuracy (91.45%, Cohen's kappa 0.91).
- The findings support the design of user-friendly explanations that enhance trust in automated driving systems.
- The framework is adaptable to various datasets and safety-critical applications beyond automated vehicles.
- Evidence-based principles derived from the study can guide future research in AI explanation generation.
Computer Science > Artificial Intelligence
arXiv:2602.13248 (cs) [Submitted on 2 Feb 2026]
Title: X-Blocks: Linguistic Building Blocks of Natural Language Explanations for Automated Vehicles
Authors: Ashkan Y. Zadeh, Xiaomeng Li, Andry Rakotonirainy, Ronald Schroeter, Sebastien Glaser, Zishuo Zhu
Abstract: Natural language explanations play a critical role in establishing trust and acceptance of automated vehicles (AVs), yet existing approaches lack systematic frameworks for analysing how humans linguistically construct driving rationales across diverse scenarios. This paper introduces X-Blocks (eXplanation Blocks), a hierarchical analytical framework that identifies the linguistic building blocks of natural language explanations for AVs at three levels: context, syntax, and lexicon. At the context level, we propose RACE (Reasoning-Aligned Classification of Explanations), a multi-LLM ensemble framework that combines Chain-of-Thought reasoning with self-consistency mechanisms to robustly classify explanations into 32 scenario-aware categories. Applied to human-authored explanations from the Berkeley DeepDrive-X dataset, RACE achieves 91.45% accuracy and a Cohen's kappa of 0.91 against cases with human annotator agreement, indicating near-human reliability for context classification...
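The abstract describes RACE only at a high level. As a rough illustration of the general pattern it names (Chain-of-Thought prompting combined with self-consistency voting across a multi-model ensemble), the sketch below is a minimal, runnable Python mock-up. Everything in it is assumed for illustration: the `MODELS` list, the mock `query_model` call, and the three example labels stand in for the paper's unpublished prompts, ensemble members, and 32-category taxonomy; this is not the authors' implementation.

```python
import random
from collections import Counter

from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical stand-ins: the paper does not publish RACE's prompts, model
# list, or its full 32-category taxonomy, so these are illustrative only.
MODELS = ["model-a", "model-b", "model-c"]  # placeholder ensemble members
CATEGORIES = ["lead_vehicle_stops", "traffic_light", "pedestrian_crossing"]

COT_PROMPT = (
    "Classify this driving explanation into one scenario category.\n"
    "Categories: {categories}\n"
    "Explanation: {explanation}\n"
    "Think step by step, then give exactly one category label on the last line."
)

def query_model(model: str, prompt: str) -> str:
    """Mock chat-completion call so the sketch runs end to end; a real
    implementation would call an LLM API with a sampling temperature > 0."""
    return "step-by-step reasoning...\n" + random.choice(CATEGORIES)

def extract_label(response: str) -> str:
    """Take the last non-empty line of the completion as the predicted label."""
    lines = [ln.strip() for ln in response.splitlines() if ln.strip()]
    return lines[-1] if lines else ""

def classify(explanation: str, samples_per_model: int = 5) -> str:
    """Self-consistency over a multi-model ensemble: draw several
    Chain-of-Thought samples per model and majority-vote the labels."""
    prompt = COT_PROMPT.format(
        categories=", ".join(CATEGORIES), explanation=explanation
    )
    votes = Counter()
    for model in MODELS:
        for _ in range(samples_per_model):
            label = extract_label(query_model(model, prompt))
            if label in CATEGORIES:  # discard malformed completions
                votes[label] += 1
    return votes.most_common(1)[0][0]

# Agreement metrics of the kind the abstract reports: accuracy and Cohen's
# kappa, where kappa = (p_o - p_e) / (1 - p_e) corrects raw agreement p_o
# for the agreement p_e expected by chance.
gold = ["traffic_light", "pedestrian_crossing", "lead_vehicle_stops"]
preds = [classify(f"example explanation {i}") for i in range(len(gold))]
print("accuracy:", accuracy_score(gold, preds))
print("kappa:   ", cohen_kappa_score(gold, preds))
```

The final lines compute accuracy and Cohen's kappa with scikit-learn, the two metrics the abstract reports; kappa corrects raw agreement for the agreement expected by chance, which is why it is a stricter check against human annotations than accuracy alone.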