[2602.13323] Contrastive explanations of BDI agents

arXiv - AI 3 min read Article

Summary

This article discusses the extension of Belief-Desire-Intention (BDI) agents to provide contrastive explanations, enhancing transparency and trust in autonomous systems.

Why It Matters

Understanding how BDI agents can effectively communicate their decision-making processes is crucial for building user trust in AI systems. This research highlights the importance of contrastive explanations, which may lead to better user comprehension and acceptance of AI behaviors.

Key Takeaways

  • Contrastive explanations help answer why an action was taken instead of another.
  • Using contrastive questions can significantly reduce the length of explanations.
  • Human evaluations suggest that contrastive answers may enhance trust and perceived understanding.
  • Providing explanations does not always yield positive outcomes; sometimes, less information is better.
  • The study emphasizes the need for effective communication strategies in AI systems.

Computer Science > Artificial Intelligence

arXiv:2602.13323 (cs) [Submitted on 10 Feb 2026]

Title: Contrastive explanations of BDI agents
Authors: Michael Winikoff

Abstract: The ability of autonomous systems to provide explanations is important for supporting transparency and aiding the development of (appropriate) trust. Prior work has defined a mechanism for Belief-Desire-Intention (BDI) agents to answer questions of the form "why did you do action X?". However, we know that people ask contrastive questions ("why did you do X instead of Y?"). We therefore extend the previous work to answer such questions. A computational evaluation shows that using contrastive questions yields a significant reduction in explanation length. A human subject evaluation was conducted to assess whether such contrastive answers are preferred, and how well they support trust development and transparency. We found some evidence that contrastive answers are preferred, and some evidence that they led to higher trust, perceived understanding, and confidence in the system's correctness. We also evaluated the benefit of providing explanations at all. Surprisingly, there was no clear benefit, and in some situations we found evidence that providing a (full) explanation was worse than providing no explanation at all.
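The core idea in the abstract, answering "why did you do X instead of Y?" by reporting only what differs between the chosen action and the rejected alternative, can be illustrated with a toy BDI-style agent. This is a minimal sketch under assumed structures: the `Plan` fields, the belief set, and the "first applicable plan wins" selection rule are illustrative inventions, not the paper's actual mechanism.

```python
# Toy sketch of contrastive "why X instead of Y?" answering for a
# BDI-style agent. Plan structure, beliefs, and the selection rule
# are illustrative assumptions, not the paper's actual mechanism.
from dataclasses import dataclass, field


@dataclass
class Plan:
    action: str                                 # action the plan performs
    goal: str                                   # goal the plan achieves
    context: set = field(default_factory=set)   # beliefs required to apply it


def why_instead_of(done, alternative, plans, beliefs):
    """Explain why `done` was chosen rather than `alternative`.

    Instead of dumping the full goal-plan trace (a non-contrastive
    explanation), report only the difference: what stopped the
    alternative's plan from being selected.
    """
    alt_plans = [p for p in plans if p.action == alternative]
    if not alt_plans:
        return f"No plan performs {alternative}."
    reasons = []
    for p in alt_plans:
        missing = p.context - beliefs
        if missing:
            reasons.append(
                f"The plan for {alternative} (goal {p.goal}) was "
                f"not applicable: missing beliefs {sorted(missing)}."
            )
        else:
            reasons.append(
                f"The plan for {alternative} was applicable, but the "
                f"plan for {done} was selected first."
            )
    return " ".join(reasons)


beliefs = {"door_closed"}
plans = [
    Plan("open_door", "enter_room", {"door_closed"}),
    Plan("use_window", "enter_room", {"window_open"}),
]
print(why_instead_of("open_door", "use_window", plans, beliefs))
```

Because the answer mentions only the failed context condition rather than the whole deliberation trace, it is much shorter than a full explanation, which is consistent with the length reduction the paper's computational evaluation reports.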

Related Articles

Robotics

SMASH2000, an AI-powered optic that turns an AR-15 into an anti-drone platform

Reddit - Artificial Intelligence · 1 min ·
Machine Learning

Nomadic raises $8.4 million to wrangle the data pouring off autonomous vehicles | TechCrunch

The company turns footage from robots into structured, searchable datasets with a deep learning model.

TechCrunch - AI · 6 min ·
Machine Learning

The AI Chip War is Just Getting Started

Everyone talks about AI models, but the real bottleneck might be hardware. According to a recent study by Roots Analysis: AI chip market ...

Reddit - Artificial Intelligence · 1 min ·
Robotics

What happens when AI agents can earn and spend real money? I built a small test to find out

I've been sitting with a question for a while: what happens when AI agents aren't just tools to be used, but participants in an economy? ...

Reddit - Artificial Intelligence · 1 min ·