[2602.17364] A feature-stable and explainable machine learning framework for trustworthy decision-making under incomplete clinical data

arXiv - AI · 4 min read

Summary

This article presents CACTUS, a machine learning framework designed to enhance decision-making in clinical settings by ensuring feature stability and interpretability under incomplete data conditions.

Why It Matters

The framework addresses critical challenges in applying machine learning to biomedical data, such as robustness and interpretability, which are essential for trust in high-stakes medical decisions. By focusing on feature stability, CACTUS aims to improve the reliability of predictive models in clinical environments where data may be incomplete.

Key Takeaways

  • The CACTUS framework enhances feature stability and interpretability in machine learning models for clinical data.
  • It demonstrates strong predictive performance while keeping its key features stable under missing-data conditions.
  • The framework is designed to support trustworthy decision-making in high-stakes biomedical applications.

Computer Science > Machine Learning

arXiv:2602.17364 (cs) · Submitted on 19 Feb 2026

Title: A feature-stable and explainable machine learning framework for trustworthy decision-making under incomplete clinical data

Authors: Justyna Andrys-Olek, Paulina Tworek, Luca Gherardini, Mark W. Ruddock, Mary Jo Kurt, Peter Fitzgerald, Jose Sousa

Abstract: Machine learning models are increasingly applied to biomedical data, yet their adoption in high-stakes domains remains limited by poor robustness, limited interpretability, and instability of learned features under realistic data perturbations, such as missingness. In particular, models that achieve high predictive performance may still fail to inspire trust if their key features fluctuate when data completeness changes, undermining reproducibility and downstream decision-making. Here, we present CACTUS (Comprehensive Abstraction and Classification Tool for Uncovering Structures), an explainable machine learning framework explicitly designed to address these challenges in small, heterogeneous, and incomplete clinical datasets. CACTUS integrates feature abstraction, interpretable classification, and systematic feature stab...
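The core concern the abstract raises, that a model's top-ranked features should not fluctuate as data completeness changes, can be made concrete with a small experiment. The sketch below is purely illustrative and is not the CACTUS implementation: it trains a random-forest classifier on synthetic data, simulates increasing rates of missingness, mean-imputes, and measures how much the top-5 feature ranking overlaps with the complete-data baseline (Jaccard similarity). All model choices, the imputation strategy, and the synthetic dataset are assumptions made for the demonstration.

```python
# Illustrative sketch (not the CACTUS method): measuring how stable a model's
# top-ranked features remain as data completeness degrades.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

def top_k_features(X, y, k=5):
    """Fit a forest and return the indices of the k most important features."""
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    return set(np.argsort(model.feature_importances_)[-k:])

baseline = top_k_features(X, y)  # ranking on the complete data

for rate in (0.1, 0.3, 0.5):
    X_miss = X.copy()
    mask = rng.random(X.shape) < rate      # simulate missing entries at random
    X_miss[mask] = np.nan
    X_imp = SimpleImputer(strategy="mean").fit_transform(X_miss)
    ranked = top_k_features(X_imp, y)
    jaccard = len(baseline & ranked) / len(baseline | ranked)
    print(f"missing={rate:.0%}  top-5 overlap (Jaccard)={jaccard:.2f}")
```

A feature-stable framework in the paper's sense would keep this overlap high even at substantial missingness rates, whereas a brittle model would show its key features churning as completeness drops.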

Related Articles

  • [2603.17677] Adaptive Guidance for Retrieval-Augmented Masked Diffusion Models (arXiv - Machine Learning · 3 min)
  • [2601.16933] Reward-Forcing: Autoregressive Video Generation with Reward Feedback (arXiv - Machine Learning · 3 min)
  • [2511.14617] Seer: Online Context Learning for Fast Synchronous LLM Reinforcement Learning (arXiv - Machine Learning · 4 min)
  • [2510.15483] Fast Best-in-Class Regret for Contextual Bandits (arXiv - Machine Learning · 3 min)