[2602.12542] Exploring Accurate and Transparent Domain Adaptation in Predictive Healthcare via Concept-Grounded Orthogonal Inference

arXiv - Machine Learning

Summary

The paper presents ExtraCare, a domain adaptation method for predictive healthcare that improves both accuracy and transparency by decomposing patient representations into invariant and covariant components.

Why It Matters

This research addresses the critical issue of performance degradation in clinical event prediction models when faced with varying data distributions. By improving transparency and interpretability, ExtraCare could foster greater trust in AI applications within healthcare, which is essential for their adoption in clinical settings.

Key Takeaways

  • ExtraCare improves predictive accuracy by decomposing patient representations.
  • The model enhances transparency by providing human-understandable explanations.
  • It demonstrates superior performance on real-world EHR datasets.
  • The method preserves label information while exposing domain-specific variations.
  • Targeted ablations quantify the contributions of medical concepts.
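The last takeaway, quantifying concept contributions via targeted ablation, can be sketched in a few lines. The concept names, the linear scorer, and its weights below are hypothetical stand-ins, not the paper's model: the point is only that zeroing out one concept-aligned latent dimension and measuring how far the prediction moves gives a per-concept contribution score.

```python
import numpy as np

# Hypothetical concept-aligned latent dimensions and a toy linear scorer
# (assumed names and weights, for illustration only).
concepts = ["hypertension", "diabetes", "anemia"]
weights = np.array([0.8, -0.3, 0.1])   # assumed scorer weights
z = np.array([1.0, 2.0, 0.5])          # one patient's sparse latent code

def ablation_contribution(z, weights, dim):
    """Change in the score when latent dimension `dim` is zeroed out."""
    z_ablated = z.copy()
    z_ablated[dim] = 0.0
    return float(weights @ z - weights @ z_ablated)

for i, name in enumerate(concepts):
    print(f"{name}: {ablation_contribution(z, weights, i):+.2f}")
# hypertension: +0.80, diabetes: -0.60, anemia: +0.05
```

With a linear scorer the ablation score reduces to weight times activation; for a deep model the same zero-out-and-rescore loop applies, but the contribution is no longer a closed-form product.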

Computer Science > Machine Learning
arXiv:2602.12542 (cs) [Submitted on 13 Feb 2026]

Title: Exploring Accurate and Transparent Domain Adaptation in Predictive Healthcare via Concept-Grounded Orthogonal Inference
Authors: Pengfei Hu, Chang Lu, Feifan Liu, Yue Ning

Abstract: Deep learning models for clinical event prediction on electronic health records (EHR) often suffer performance degradation when deployed under different data distributions. While domain adaptation (DA) methods can mitigate such shifts, their "black-box" nature prevents widespread adoption in clinical practice, where transparency is essential for trust and safety. We propose ExtraCare to decompose patient representations into invariant and covariant components. By supervising these two components and enforcing their orthogonality during training, our model preserves label information while simultaneously exposing domain-specific variation, yielding more accurate predictions than most feature-alignment models. More importantly, it offers human-understandable explanations by mapping sparse latent dimensions to medical concepts and quantifying their contributions via targeted ablations. ExtraCare is evaluated on two real-world EHR datasets across multiple domain-partition settings, demonstrating superior performance.
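The orthogonality constraint described in the abstract can be illustrated with a minimal sketch. The function name and the toy batches are assumptions for illustration, not the authors' implementation: the penalty simply drives the per-sample inner product between the invariant and covariant representations toward zero, so the two branches carry non-overlapping information.

```python
import numpy as np

def orthogonality_penalty(z_inv: np.ndarray, z_cov: np.ndarray) -> float:
    """Mean squared per-sample inner product between the two
    representation branches (each batch x dim).  Zero exactly when
    every invariant vector is orthogonal to its covariant counterpart."""
    dots = np.sum(z_inv * z_cov, axis=1)   # one inner product per sample
    return float(np.mean(dots ** 2))

# Toy batch: two samples with 2-D representations.
z_inv = np.array([[1.0, 0.0], [0.0, 1.0]])
z_cov_orth = np.array([[0.0, 3.0], [2.0, 0.0]])   # orthogonal per sample
z_cov_skew = np.array([[1.0, 1.0], [1.0, 1.0]])   # overlapping directions

print(orthogonality_penalty(z_inv, z_cov_orth))   # 0.0
print(orthogonality_penalty(z_inv, z_cov_skew))   # 1.0
```

In training, a term like this would be added (with some weight) to the supervised losses on the two components, so that minimizing the total objective both preserves label information and separates domain-specific variation.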
