[2603.04113] Understanding Sources of Demographic Predictability in Brain MRI via Disentangling Anatomy and Contrast


About this article


Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.04113 (cs) · Submitted on 4 Mar 2026

Title: Understanding Sources of Demographic Predictability in Brain MRI via Disentangling Anatomy and Contrast

Authors: Mehmet Yigit Avci, Akshit Achara, Andrew King, Jorge Cardoso (and for the Alzheimer's Disease Neuroimaging Initiative)

Abstract: Demographic attributes such as age, sex, and race can be predicted from medical images, raising concerns about bias in clinical AI systems. In brain MRI, this signal may arise from anatomical variation, acquisition-dependent contrast differences, or both, yet these sources remain entangled in conventional analyses. Without disentangling them, mitigation strategies risk failing to address the underlying causes. We propose a controlled framework based on disentangled representation learning, decomposing brain MRI into anatomy-focused representations that suppress acquisition influence and contrast embeddings that capture acquisition-dependent characteristics. Training predictive models for age, sex, and race on full images, anatomical representations, and contrast-only embeddings allows us to quantify the relative contributions of structure and acquisition to the demographic signal. Across three datasets and multiple MRI seque...
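The evaluation protocol the abstract describes — training the same demographic predictor on three input types and comparing performance to attribute the signal — can be illustrated with a minimal sketch. This is not the paper's implementation: the data here is synthetic, the "anatomy" and "contrast" features are stand-ins for the learned representations, and a simple logistic-regression probe replaces the paper's predictive models.

```python
# Hypothetical sketch of the comparison protocol: fit the same demographic
# classifier on (a) full-image features, (b) anatomy-focused representations,
# (c) contrast-only embeddings, and compare accuracies. All data is synthetic;
# in the paper these would come from a disentangling encoder, not a RNG.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
sex = rng.integers(0, 2, n)                # demographic label to probe for

anatomy = rng.normal(size=(n, 16))         # stand-in anatomy representation
anatomy[:, 0] += 1.5 * sex                 # strong structural signal (assumed)
contrast = rng.normal(size=(n, 8))         # stand-in contrast embedding
contrast[:, 0] += 0.4 * sex                # weak acquisition-linked signal (assumed)
full = np.hstack([anatomy, contrast])      # "full image" carries both sources

def predictability(X, y):
    """Cross-validated accuracy of a linear probe on features X."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()

for name, X in [("full image", full), ("anatomy", anatomy), ("contrast", contrast)]:
    print(f"{name:10s} accuracy: {predictability(X, sex):.2f}")
```

Under these assumed effect sizes the anatomy-only probe scores well above the contrast-only probe, which is the kind of gap the framework uses to argue that structure, rather than acquisition, drives the demographic signal.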

Originally published on March 05, 2026. Curated by AI News.

Related Articles

Machine Learning

[R] I trained a 3k parameter model on XOR sequences of length 20. It extrapolates perfectly to length 1,000,000. Here's why I think that's architecturally significant.

I've been working on an alternative to attention-based sequence modeling that I'm calling Geometric Flow Networks (GFN). The core idea: i...

Reddit - Machine Learning · 1 min
Machine Learning

[D] Data curation and targeted replacement as a pre-training alignment and controllability method

Hi, r/MachineLearning: has much research been done in large-scale training scenarios where undesirable data has been replaced before trai...

Reddit - Machine Learning · 1 min
AI Safety

I’ve come up with a new thought experiment to approach ASI, and it challenges the very notions of alignment and containment

I’ve written an essay exploring what I’m calling the Super-Intelligent Octopus Problem—a thought experiment designed to surface a paradox...

Reddit - Artificial Intelligence · 1 min
AI Safety

Bias in AI: Examples and 6 Ways to Fix it in 2026

AI bias is an anomaly in the output of ML algorithms due to prejudiced assumptions. Explore types of AI bias, examples, how to reduce bia...

AI Events · 36 min

