[2602.13985] Bridging AI and Clinical Reasoning: Abductive Explanations for Alignment on Critical Symptoms
Summary
This article summarizes research on integrating AI into clinical diagnostics, focusing on formal abductive explanations as a way to align AI decision-making with clinical reasoning and to improve trust and interpretability in medical decision-making.
Why It Matters
As AI becomes increasingly prevalent in healthcare, ensuring that AI systems align with clinical reasoning is crucial for their acceptance and effectiveness. This research addresses a gap in AI interpretability that clinicians must close before they can trust AI-generated insights and use them to improve patient outcomes.
Key Takeaways
- AI's diagnostic accuracy can match or exceed that of human experts.
- Current AI models often fail to align with structured clinical frameworks.
- Abductive explanations can enhance the interpretability of AI decisions.
- The proposed framework maintains predictive accuracy while improving clinical insights.
- Trustworthy AI in healthcare is essential for broader adoption and effective patient care.
arXiv Listing
Computer Science > Artificial Intelligence — arXiv:2602.13985 [cs.AI]
Submitted on 15 Feb 2026
Title: Bridging AI and Clinical Reasoning: Abductive Explanations for Alignment on Critical Symptoms
Authors: Belona Sonna, Alban Grastien
Abstract: Artificial intelligence (AI) has demonstrated strong potential in clinical diagnostics, often achieving accuracy comparable to or exceeding that of human experts. A key challenge, however, is that AI reasoning frequently diverges from structured clinical frameworks, limiting trust, interpretability, and adoption. Critical symptoms, pivotal for rapid and accurate decision-making, may be overlooked by AI models even when predictions are correct. Existing post hoc explanation methods provide limited transparency and lack formal guarantees. To address this, we leverage formal abductive explanations, which offer consistent, guaranteed reasoning over minimal sufficient feature sets. This enables a clear understanding of AI decision-making and allows alignment with clinical reasoning. Our approach preserves predictive accuracy while providing clinically actionable insights, establishing a robust framework for trustworthy AI in medical diagnosis.
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.13985 [cs.AI]
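To make the core idea concrete: an abductive explanation is a minimal subset of feature values that, on its own, guarantees the model's prediction no matter how the remaining features are set. The sketch below is a minimal illustration, not the paper's method: it uses a hypothetical rule-based classifier over three invented binary symptoms and a deletion-based greedy minimization, checking sufficiency by brute-force enumeration (feasible only for small feature spaces; the paper relies on formal reasoning instead).

```python
from itertools import product

# Hypothetical toy classifier over three binary symptoms; names and
# rule are illustrative assumptions, not taken from the paper.
def classify(fever, chest_pain, cough):
    return 1 if (fever and chest_pain) else 0  # 1 = "urgent"

FEATURES = ["fever", "chest_pain", "cough"]

def is_sufficient(fixed, prediction):
    """A partial assignment is sufficient if every completion of the
    free (unfixed) features still yields the same prediction."""
    free = [f for f in FEATURES if f not in fixed]
    for values in product([0, 1], repeat=len(free)):
        instance = dict(fixed, **dict(zip(free, values)))
        if classify(**instance) != prediction:
            return False
    return True

def abductive_explanation(instance):
    """Deletion-based minimization: try dropping each feature and keep
    the drop if the remainder is still sufficient. The result is
    subset-minimal: no strict subset guarantees the prediction."""
    prediction = classify(**instance)
    explanation = dict(instance)
    for f in FEATURES:
        trial = {k: v for k, v in explanation.items() if k != f}
        if is_sufficient(trial, prediction):
            explanation = trial
    return explanation

patient = {"fever": 1, "chest_pain": 1, "cough": 0}
print(abductive_explanation(patient))  # {'fever': 1, 'chest_pain': 1}
```

Here `cough` is dropped because both of its values leave the prediction unchanged, while neither `fever` nor `chest_pain` can be removed: this is exactly the sense in which an abductive explanation surfaces the critical symptoms behind a decision.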