[2603.19386] TuLaBM: Tumor-Biased Latent Bridge Matching for Contrast-Enhanced MRI Synthesis


arXiv - Machine Learning


arXiv:2603.19386 (eess) · Electrical Engineering and Systems Science > Image and Video Processing · Submitted on 19 Mar 2026

Title: TuLaBM: Tumor-Biased Latent Bridge Matching for Contrast-Enhanced MRI Synthesis

Authors: Atharva Rege, Adinath Madhavrao Dukre, Numan Balci, Dwarikanath Mahapatra, Imran Razzak

Abstract: Contrast-enhanced magnetic resonance imaging (CE-MRI) plays a crucial role in brain tumor assessment; however, its acquisition requires gadolinium-based contrast agents (GBCAs), which increase costs and raise safety concerns. Consequently, synthesizing CE-MRI from non-contrast MRI (NC-MRI) has emerged as a promising alternative. Early Generative Adversarial Network (GAN)-based approaches suffered from instability and mode collapse, while diffusion models, despite impressive synthesis quality, remain computationally expensive and often fail to faithfully reproduce critical tumor contrast patterns. To address these limitations, we propose Tumor-Biased Latent Bridge Matching (TuLaBM), which formulates NC-to-CE MRI translation as Brownian bridge transport between source and target distributions in a learned latent space, enabling efficient training and inference. To enhance tumor-region fidelity, we introduce a Tumor-Biased Attention Mechanism (TuBAM) that amplifies tumor-relevant latent feature...
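The abstract does not give the paper's transport equations, but a standard Brownian bridge between two fixed endpoints has a simple closed-form marginal: a linear interpolation of the endpoints plus Gaussian noise whose variance vanishes at both ends, pinning the process to the source latent at t = 0 and the target latent at t = 1. A minimal NumPy sketch under that assumption (function name, latent shapes, and `sigma` are all illustrative, not from the paper):

```python
import numpy as np

def brownian_bridge_sample(z_src, z_tgt, t, sigma=1.0, rng=None):
    """Sample a Brownian bridge between two latent codes at time t in [0, 1].

    Mean is the linear interpolation (1-t)*z_src + t*z_tgt; the noise
    standard deviation sigma*sqrt(t*(1-t)) is zero at both endpoints,
    so the bridge is pinned to z_src at t=0 and z_tgt at t=1.
    """
    rng = np.random.default_rng() if rng is None else rng
    mean = (1.0 - t) * z_src + t * z_tgt
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(z_src.shape)

# Toy latent codes standing in for encoded NC-MRI and CE-MRI slices.
z_nc = np.zeros((4, 8, 8))   # hypothetical non-contrast latent
z_ce = np.ones((4, 8, 8))    # hypothetical contrast-enhanced latent

z_mid = brownian_bridge_sample(z_nc, z_ce, t=0.5, sigma=0.1)  # noisy midpoint
z_end = brownian_bridge_sample(z_nc, z_ce, t=1.0)             # exactly z_ce
```

In bridge-matching methods of this family, a network is typically trained to predict the transport direction from such intermediate samples; the tumor-biased attention described in the abstract would additionally reweight the latent features, which this sketch does not attempt to model.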

Originally published on March 23, 2026. Curated by AI News.

