[2604.06211] Illocutionary Explanation Planning for Source-Faithful Explanations in Retrieval-Augmented Language Models
Computer Science > Computation and Language
arXiv:2604.06211 (cs)
[Submitted on 16 Mar 2026]

Title: Illocutionary Explanation Planning for Source-Faithful Explanations in Retrieval-Augmented Language Models
Authors: Francesco Sovrano, Alberto Bacchelli

Abstract: Natural language explanations produced by large language models (LLMs) are often persuasive, but not necessarily scrutable: users cannot easily verify whether the claims in an explanation are supported by evidence. In XAI, this motivates a focus on faithfulness and traceability, i.e., the extent to which an explanation's claims can be grounded in, and traced back to, an explicit source. We study these desiderata in retrieval-augmented generation (RAG) for programming education, where textbooks provide authoritative evidence. We benchmark six LLMs on 90 Stack Overflow questions grounded in three programming textbooks and quantify source faithfulness via source adherence metrics. We find that non-RAG models have a median source adherence of 0%, while baseline RAG systems still exhibit low median adherence (22-40%, depending on the model). Motivated by Achinstein's illocutionary theory of explanation, we introduce illocutionary macro-planning as a descriptive design principle for sou...
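The abstract does not define how source adherence is computed. As a rough illustration only (not the authors' metric), one could approximate adherence with a lexical-overlap proxy: the fraction of explanation sentences whose content words largely appear in the retrieved source passage. The function name, threshold, and tokenization below are all illustrative assumptions.

```python
import re


def source_adherence(sentences, source_text, threshold=0.5):
    """Illustrative proxy (not the paper's metric): the fraction of
    explanation sentences whose content words (length > 3) overlap
    the source passage at a rate of at least `threshold`."""
    source_tokens = set(re.findall(r"[a-z]+", source_text.lower()))
    supported = 0
    for sentence in sentences:
        # Keep only longer words as a crude stand-in for content words.
        words = [w for w in re.findall(r"[a-z]+", sentence.lower()) if len(w) > 3]
        if words and sum(w in source_tokens for w in words) / len(words) >= threshold:
            supported += 1
    return supported / len(sentences) if sentences else 0.0
```

Under this sketch, an explanation whose sentences never reuse the source's vocabulary scores 0, matching the intuition behind the 0% median adherence reported for non-RAG models; a real metric would instead need claim-level entailment or citation checking rather than word overlap.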