[2602.22288] Reliable XAI Explanations in Sudden Cardiac Death Prediction for Chagas Cardiomyopathy

arXiv - Machine Learning · 4 min read

Summary

This article discusses a novel explainable AI (XAI) method for predicting sudden cardiac death in Chagas cardiomyopathy, emphasizing its accuracy and reliability.

Why It Matters

The ability to predict sudden cardiac death in patients with Chagas cardiomyopathy is crucial for improving patient outcomes. This research addresses the transparency issues in AI models, enhancing trust and facilitating clinical adoption, especially in endemic regions.

Key Takeaways

  • Introduces a logic-based explainability method for AI in healthcare (see the sketch after this list).
  • Achieves over 95% accuracy and recall in predicting sudden cardiac death.
  • Demonstrates 100% explanation fidelity, enhancing trust in AI decisions.
  • Outperforms heuristic methods in consistency and robustness.
  • Supports the integration of AI tools in clinical practice, particularly in endemic areas.
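
To make the "logic-based explainability with correctness guarantees" takeaway concrete, here is a minimal sketch written for this summary (not code from the paper): for a toy rule-based classifier over hypothetical binary risk factors (illustrative names such as low_ejection_fraction and nsvt), an explanation is a subset of the patient's feature values that is provably sufficient for the prediction, verified by exhaustively checking every completion of the remaining features.

```python
from itertools import product

# Hypothetical binary risk factors; names are illustrative only.
FEATURES = ["low_ejection_fraction", "nsvt", "syncope", "late_potentials"]

def predict(x):
    # Toy stand-in classifier: predicts SCD risk (1) from a simple rule.
    return int(x["low_ejection_fraction"] and (x["nsvt"] or x["syncope"]))

def is_sufficient(fixed, target):
    """True iff every input agreeing with `fixed` gets prediction `target`."""
    free = [f for f in FEATURES if f not in fixed]
    return all(
        predict({**dict(zip(free, values)), **fixed}) == target
        for values in product([0, 1], repeat=len(free))
    )

def sufficient_reason(instance):
    """Greedily drop features while the rest remains sufficient, yielding a
    subset-minimal 'reason' whose correctness is checked exhaustively."""
    target = predict(instance)
    reason = dict(instance)
    for f in FEATURES:
        trial = {k: v for k, v in reason.items() if k != f}
        if is_sufficient(trial, target):
            reason = trial
    return reason, target

patient = {"low_ejection_fraction": 1, "nsvt": 1, "syncope": 0, "late_potentials": 1}
print(sufficient_reason(patient))
# ({'low_ejection_fraction': 1, 'nsvt': 1}, 1)
```

Real models and feature spaces are far larger, so a practical method would rely on logical encodings and solvers rather than brute-force enumeration, but the sufficiency guarantee is the same idea.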

Computer Science > Machine Learning
arXiv:2602.22288 (cs) [Submitted on 25 Feb 2026]

Title: Reliable XAI Explanations in Sudden Cardiac Death Prediction for Chagas Cardiomyopathy
Authors: Vinícius P. Chagas, Luiz H. T. Viana, Mac M. da S. Carlos, João P. V. Madeiro, Roberto C. Pedrosa, Thiago Alves Rocha, Carlos H. L. Cavalcante

Abstract: Sudden cardiac death (SCD) is unpredictable, and its prediction in Chagas cardiomyopathy (CC) remains a significant challenge, especially in patients not classified as high risk. While AI and machine learning models improve risk stratification, their adoption is hindered by a lack of transparency, as they are often perceived as "black boxes" with unclear decision-making processes. Some approaches apply heuristic explanations without correctness guarantees, leading to mistakes in the decision-making process. To address this, we apply a logic-based explainability method with correctness guarantees to the problem of SCD prediction in CC. This explainability method, applied to an AI classifier with over 95% accuracy and recall, demonstrated strong predictive performance and 100% explanation fidelity. When compared to state-of-the-art heuristic methods, it showed superior consistency and robustness. This approach enhances clinical trust, facilitates...
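
To make the abstract's 100% explanation fidelity figure concrete, here is a small follow-on sketch (again hypothetical, not the authors' code) that measures fidelity as the fraction of inputs agreeing with an explanation's fixed feature values that still receive the same prediction. A sufficient reason like the one in the earlier sketch scores 1.0 by construction, while a heuristic-style subset that drops a needed condition can score lower.

```python
import random

# Reuses the same toy setup as the previous sketch.
FEATURES = ["low_ejection_fraction", "nsvt", "syncope", "late_potentials"]

def predict(x):
    return int(x["low_ejection_fraction"] and (x["nsvt"] or x["syncope"]))

def fidelity(fixed, target, n_samples=10_000, seed=0):
    """Fraction of sampled inputs that agree with `fixed` and keep `target`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = {f: rng.randint(0, 1) for f in FEATURES}
        x.update(fixed)                       # impose the explained conditions
        hits += (predict(x) == target)
    return hits / n_samples

# Sufficient reason from the sketch above: fidelity 1.0 by construction.
print(fidelity({"low_ejection_fraction": 1, "nsvt": 1}, target=1))
# A heuristic-style explanation that keeps only one highlighted feature
# can fall well below 1.0 (here ~0.5, since the rule needs more conditions).
print(fidelity({"nsvt": 1}, target=1))
```

This is the sense in which heuristic attributions can lack correctness guarantees: nothing forces the features they highlight to actually determine the prediction.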
