[2602.18536] Triggering hallucinations in model-based MRI reconstruction via adversarial perturbations

arXiv - Machine Learning · 4 min read · Research

Summary

This paper investigates how adversarial perturbations can induce hallucinations in generative models used for MRI reconstruction, highlighting potential risks in medical imaging.

Why It Matters

Understanding the susceptibility of generative models to adversarial attacks is crucial for improving the safety and reliability of medical imaging. Hallucinations in MRI reconstructions can lead to misdiagnoses, posing significant risks to patient health. This research underscores the need for robust detection methods and adversarial training to mitigate these risks.

Key Takeaways

  • Generative models for MRI reconstruction are vulnerable to adversarial perturbations.
  • Hallucinations can lead to incorrect diagnoses, endangering patient health.
  • Traditional image quality metrics fail to detect these hallucinations.
  • Adversarial training may help reduce the occurrence of hallucinations.
  • Novel detection methods are necessary to identify hallucinations in medical imaging.
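The takeaway about image quality metrics is easy to illustrate. A toy NumPy sketch (not from the paper; the images and sizes here are made up) shows how a small, localized artifact barely moves a global metric like PSNR, even though it could be clinically significant:

```python
import numpy as np

# Toy illustration: a tiny localized "hallucination" barely moves global PSNR.
img = np.zeros((256, 256))            # stand-in for a clean reconstruction
hallucinated = img.copy()
hallucinated[120:124, 120:124] = 1.0  # 16 bright pixels out of 65,536

mse = np.mean((img - hallucinated) ** 2)
psnr = 10 * np.log10(1.0 / mse)       # peak signal value assumed to be 1.0
print(round(psnr, 1))                 # ~36 dB: "good quality" by the metric
```

Because PSNR averages the error over all pixels, a feature covering 0.02% of the image contributes almost nothing to the score, which is why localized hallucinations slip past such metrics.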

Electrical Engineering and Systems Science > Image and Video Processing · arXiv:2602.18536 (eess) · Submitted 20 Feb 2026

Title: Triggering hallucinations in model-based MRI reconstruction via adversarial perturbations
Authors: Suna Buğday, Yvan Saeys, Jonathan Peck

Abstract: Generative models are increasingly used to improve the quality of medical imaging, such as the reconstruction of magnetic resonance images and computed tomography. However, such models are known to be susceptible to hallucinations: they may insert features into the reconstructed image that are not present in the original image. In a medical setting, such hallucinations may endanger patient health, since they can lead to incorrect diagnoses. In this work, we aim to quantify the extent to which state-of-the-art generative models suffer from hallucinations in the context of magnetic resonance image reconstruction. Specifically, we craft adversarial perturbations resembling random noise for the unprocessed input images which induce hallucinations when the images are reconstructed using a generative model. We perform this evaluation on brain and knee images from the fastMRI data set, using UNet and end-to-end VarNet architectures to reconstruct the images. Our results show that these models are highly susceptible to small...
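The attack described in the abstract is a targeted perturbation crafted by optimization. A minimal sketch of the idea in NumPy is below; it uses a fixed linear map as a stand-in for the trained network (the paper attacks UNet and end-to-end VarNet by backpropagating through them), and every name, size, and constant here is illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned reconstructor: a fixed linear map W.
# A real attack would differentiate through the trained network instead.
n = 64
W = rng.normal(size=(n, n)) / np.sqrt(n)

def reconstruct(x):
    return W @ x

x = rng.normal(size=n)        # clean (unprocessed) input measurement
target = np.zeros(n)
target[10:15] = 5.0           # the spurious feature we try to induce

def attack_loss(delta):
    # distance between the change the perturbation induces and the target
    return 0.5 * np.sum((reconstruct(x + delta) - reconstruct(x) - target) ** 2)

eps = 0.05                    # L_inf budget: perturbation stays noise-like
delta = np.zeros(n)
for _ in range(200):
    grad = W.T @ (W @ delta - target)               # gradient of attack_loss
    delta = np.clip(delta - 0.05 * grad, -eps, eps)  # projected gradient step

print(attack_loss(delta) < attack_loss(np.zeros(n)))  # True: moved toward target
print(np.abs(delta).max() <= eps)                     # True: within noise budget
```

The projection step is what keeps the perturbation visually indistinguishable from random noise while the gradient steps steer the reconstruction toward the hallucinated feature, which matches the failure mode the paper evaluates.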
