[2602.22288] Reliable XAI Explanations in Sudden Cardiac Death Prediction for Chagas Cardiomyopathy
Summary
This paper presents a logic-based explainable AI (XAI) method for predicting sudden cardiac death in Chagas cardiomyopathy, emphasizing both its predictive accuracy and the reliability of its explanations.
Why It Matters
The ability to predict sudden cardiac death in patients with Chagas cardiomyopathy is crucial for improving patient outcomes. This research addresses the transparency issues in AI models, enhancing trust and facilitating clinical adoption, especially in endemic regions.
Key Takeaways
- Introduces a logic-based explainability method for AI in healthcare.
- Achieves over 95% accuracy and recall in predicting sudden cardiac death.
- Demonstrates 100% explanation fidelity, enhancing trust in AI decisions.
- Outperforms heuristic methods in consistency and robustness.
- Supports the integration of AI tools in clinical practice, particularly in endemic areas.
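The "logic-based explainability" in the takeaways refers to formal methods that compute provably sufficient reasons for a prediction, rather than heuristic attributions. A minimal sketch of the idea follows; the feature names and the toy risk rule are purely illustrative (not the paper's actual classifier), and the brute-force check stands in for the SAT/SMT oracle such methods use in practice:

```python
from itertools import product

# Hypothetical binary clinical features, purely for illustration.
FEATURES = ["low_ef", "nsvt", "syncope", "late_potentials"]

def predict(x):
    # Toy rule: high risk if reduced ejection fraction together with NSVT or syncope.
    return int(x["low_ef"] and (x["nsvt"] or x["syncope"]))

def is_sufficient(instance, subset):
    """Check that fixing only `subset` of the instance's feature values
    forces the model's prediction, for every completion of the rest."""
    target = predict(instance)
    free = [f for f in FEATURES if f not in subset]
    for values in product([0, 1], repeat=len(free)):
        candidate = dict(instance)
        candidate.update(zip(free, values))
        if predict(candidate) != target:
            return False
    return True

def abductive_explanation(instance):
    """Greedily shrink the full feature set to a subset-minimal sufficient
    reason: a logic-based explanation with 100% fidelity by construction."""
    subset = set(FEATURES)
    for f in FEATURES:
        if is_sufficient(instance, subset - {f}):
            subset.discard(f)
    return subset

patient = {"low_ef": 1, "nsvt": 1, "syncope": 0, "late_potentials": 0}
print(sorted(abductive_explanation(patient)))  # → ['low_ef', 'nsvt']
```

Because sufficiency is verified over every possible completion of the unfixed features, the returned subset can never mislead, which is the correctness guarantee heuristic attribution methods lack.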
Computer Science > Machine Learning
arXiv:2602.22288 (cs)
[Submitted on 25 Feb 2026]
Title: Reliable XAI Explanations in Sudden Cardiac Death Prediction for Chagas Cardiomyopathy
Authors: Vinícius P. Chagas, Luiz H. T. Viana, Mac M. da S. Carlos, João P. V. Madeiro, Roberto C. Pedrosa, Thiago Alves Rocha, Carlos H. L. Cavalcante
Abstract: Sudden cardiac death (SCD) is unpredictable, and its prediction in Chagas cardiomyopathy (CC) remains a significant challenge, especially in patients not classified as high risk. While AI and machine learning models improve risk stratification, their adoption is hindered by a lack of transparency, as they are often perceived as "black boxes" with unclear decision-making processes. Some approaches apply heuristic explanations without correctness guarantees, leading to mistakes in the decision-making process. To address this, we apply a logic-based explainability method with correctness guarantees to the problem of SCD prediction in CC. This explainability method, applied to an AI classifier with over 95% accuracy and recall, demonstrated strong predictive performance and 100% explanation fidelity. When compared to state-of-the-art heuristic methods, it showed superior consistency and robustness. This approach enhances clinical trust, facilitates...
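The abstract's 100% fidelity claim can be made concrete with a small check. Fidelity, in this sense, is the fraction of inputs covered by an explanation on which the model actually outputs the explained class; formal explanations score 1.0 by construction, while heuristic ones may not. The model and explanations below are hypothetical, purely for illustration:

```python
from itertools import product

def predict(x):
    # Toy model over 3 binary features (not the paper's classifier).
    return int(x[0] and (x[1] or x[2]))

def explanation_holds(x):
    # Candidate explanation for class 1: "features 0 and 1 are both set".
    return x[0] == 1 and x[1] == 1

def fidelity(predict, explanation_holds, n_features=3):
    """Fraction of inputs covered by the explanation on which the
    model actually outputs the explained class (here, class 1)."""
    covered = [x for x in product([0, 1], repeat=n_features)
               if explanation_holds(x)]
    agree = sum(predict(x) == 1 for x in covered)
    return agree / len(covered)

print(fidelity(predict, explanation_holds))  # 1.0 -> the explanation never misleads
```

By contrast, a heuristic explanation such as "feature 1 alone implies class 1" would cover inputs where the model predicts 0, dropping its fidelity below 1.0, which is the gap the paper's consistency and robustness comparisons quantify.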