[2602.13323] Contrastive explanations of BDI agents
Summary
This article discusses an extension of Belief-Desire-Intention (BDI) agents that lets them answer contrastive why-questions, with the aim of improving transparency and supporting appropriate trust in autonomous systems.
Why It Matters
Understanding how BDI agents can effectively communicate their decision-making processes is crucial for building user trust in AI systems. This research highlights the importance of contrastive explanations, which may lead to better user comprehension and acceptance of AI behaviors.
Key Takeaways
- Contrastive explanations help answer why an action was taken instead of another.
- Using contrastive questions can significantly reduce the length of explanations.
- Human evaluations suggest that contrastive answers may enhance trust and perceived understanding.
- Providing explanations does not always yield positive outcomes; sometimes, less information is better.
- The study emphasizes the need for effective communication strategies in AI systems.
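The core idea behind the first two takeaways can be illustrated with a toy sketch. This is not the paper's algorithm, just a hypothetical illustration: a BDI-style plan library where each plan has a context condition (beliefs that must hold), and where a contrastive answer cites only the beliefs that separate the chosen action from the foil, which is typically shorter than a full explanation. All names (`plans`, `beliefs`, the example actions) are invented for illustration.

```python
# Hypothetical toy sketch -- not the mechanism from the paper.
# Each plan serves a goal and has a context condition: the set of
# beliefs that must hold for the plan to be applicable.
plans = {
    "take_umbrella": {"goal": "go_out", "context": {"raining"}},
    "take_sunhat":   {"goal": "go_out", "context": {"sunny"}},
}

beliefs = {"raining"}  # the agent's current beliefs

def full_explanation(chosen):
    """Non-contrastive answer to "why did you do X?":
    cite every belief supporting the chosen plan."""
    return sorted(plans[chosen]["context"] & beliefs)

def contrastive_explanation(chosen, foil):
    """Contrastive answer to "why X instead of F?":
    cite only the foil's required beliefs that do not hold,
    i.e. only the difference between the two cases."""
    return sorted(plans[foil]["context"] - beliefs)

print(full_explanation("take_umbrella"))                       # ['raining']
print(contrastive_explanation("take_umbrella", "take_sunhat")) # ['sunny']
```

In this toy setting the contrastive answer mentions one belief ("sunny" does not hold) rather than the full chain behind the chosen action, which mirrors the paper's computational finding that contrastive questions significantly reduce explanation length.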
Computer Science > Artificial Intelligence
arXiv:2602.13323 (cs) [Submitted on 10 Feb 2026]
Title: Contrastive explanations of BDI agents
Authors: Michael Winikoff
Abstract: The ability of autonomous systems to provide explanations is important for supporting transparency and aiding the development of (appropriate) trust. Prior work has defined a mechanism for Belief-Desire-Intention (BDI) agents to be able to answer questions of the form "why did you do action X?". However, we know that people ask contrastive questions ("why did you do X instead of F?"). We therefore extend previous work to be able to answer such questions. A computational evaluation shows that using contrastive questions yields a significant reduction in explanation length. A human subject evaluation was conducted to assess whether such contrastive answers are preferred, and how well they support trust development and transparency. We found some evidence for contrastive answers being preferred, and some evidence that they led to higher trust, perceived understanding, and confidence in the system's correctness. We also evaluated the benefit of providing explanations at all. Surprisingly, there was not a clear benefit, and in some situations we found evidence that providing a (full) explanation was worse than not providing any explanation.