[2602.22149] Enhancing Framingham Cardiovascular Risk Score Transparency through Logic-Based XAI
Summary
This article presents a logic-based explainable AI model designed to enhance the transparency of the Framingham Cardiovascular Risk Score, giving clinicians clearer explanations and actionable insights.
Why It Matters
The Framingham Risk Score is a crucial tool in predicting cardiovascular disease risk, yet its lack of transparency limits its effectiveness. By introducing a logical explainer, this research aims to enhance clinician trust and facilitate better patient outcomes through clear, actionable insights.
Key Takeaways
- Introduces a logic-based explainer for the Framingham Risk Score.
- Enhances transparency in cardiovascular risk assessment.
- Identifies modifiable risk factors to improve patient outcomes.
- Supports clinical decision-making with actionable insights.
- Aims to increase trust in risk assessment tools among clinicians.
Computer Science > Logic in Computer Science
arXiv:2602.22149 (cs)
[Submitted on 25 Feb 2026]
Title: Enhancing Framingham Cardiovascular Risk Score Transparency through Logic-Based XAI
Authors: Emannuel L. de A. Bezerra, Luiz H. T. Viana, Vinícius P. Chagas, Diogo E. Rolim, Thiago Alves Rocha, Carlos H. L. Cavalcante
Abstract: Cardiovascular disease (CVD) remains one of the leading global health challenges, accounting for more than 19 million deaths worldwide. To address this, several tools have been developed to predict CVD risk and support clinical decision making. In particular, the Framingham Risk Score (FRS) is one of the most widely used and recommended worldwide. However, it does not explain why a patient was assigned to a particular risk category, nor how that risk can be reduced. To address this lack of transparency, we present a logical explainer for the FRS. Grounded in first-order logic and the fundamentals of explainable artificial intelligence (XAI), the explainer is capable of identifying a minimal set of patient attributes that is sufficient to explain a given risk classification. Our explainer also produces actionable scenarios that illustrate which modifiable variables would reduce a patient's risk.
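To make the core idea concrete, the sketch below shows one standard way to compute a subset-minimal sufficient explanation: greedily try to delete each attribute and keep it only if, without it, some completion of the remaining free attributes would change the predicted class. This is a generic deletion-based illustration under a toy, hypothetical rule-based classifier and hand-picked candidate value domains; it is not the paper's actual FRS model or algorithm.

```python
from itertools import product

# Hypothetical toy scoring rules (NOT the real Framingham coefficients).
def risk_category(patient):
    score = 0
    if patient.get("age", 0) >= 60:
        score += 2
    if patient.get("smoker", False):
        score += 2
    if patient.get("systolic_bp", 0) >= 140:
        score += 1
    if patient.get("total_chol", 0) >= 240:
        score += 1
    return "high" if score >= 4 else "low"

# Representative candidate values for each attribute (an assumption made
# so sufficiency can be checked by finite enumeration).
DOMAINS = {
    "age": [30, 70],
    "smoker": [False, True],
    "systolic_bp": [110, 160],
    "total_chol": [180, 260],
}

def is_sufficient(partial, classify, target):
    """A partial assignment is sufficient if every completion of the
    unset attributes still yields the target class."""
    free = [a for a in DOMAINS if a not in partial]
    for values in product(*(DOMAINS[a] for a in free)):
        completed = {**partial, **dict(zip(free, values))}
        if classify(completed) != target:
            return False
    return True

def minimal_sufficient_explanation(patient, classify):
    """Deletion-based search for a subset-minimal sufficient reason:
    drop each attribute in turn and keep the drop only if the remaining
    attributes still force the original classification."""
    target = classify(patient)
    kept = dict(patient)
    for attr in list(patient):
        trial = {k: v for k, v in kept.items() if k != attr}
        if is_sufficient(trial, classify, target):
            kept = trial
    return kept

patient = {"age": 72, "smoker": True, "systolic_bp": 150, "total_chol": 250}
explanation = minimal_sufficient_explanation(patient, risk_category)
print(explanation)  # age is dropped: smoking, BP, and cholesterol already force "high"
```

Under this toy model, the patient's age turns out to be redundant: smoking, elevated blood pressure, and elevated cholesterol alone guarantee the "high" classification. The same machinery suggests actionable scenarios, since each attribute kept in the explanation is one whose change could flip the class.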