[2602.18536] Triggering hallucinations in model-based MRI reconstruction via adversarial perturbations

arXiv - Machine Learning · 4 min read · Research

Summary

This paper investigates how adversarial perturbations can induce hallucinations in generative models used for MRI reconstruction, highlighting potential risks in medical imaging.

Why It Matters

Understanding the susceptibility of generative models to adversarial attacks is crucial for improving the safety and reliability of medical imaging. Hallucinations in MRI reconstructions can lead to misdiagnoses, posing significant risks to patient health. This research underscores the need for robust detection methods and adversarial training to mitigate these risks.

Key Takeaways

  • Generative models for MRI reconstruction are vulnerable to adversarial perturbations.
  • Hallucinations can lead to incorrect diagnoses, endangering patient health.
  • Traditional image quality metrics fail to detect these hallucinations (a quick illustration follows this list).
  • Adversarial training may help reduce the occurrence of hallucinations.
  • Novel detection methods are necessary to identify hallucinations in medical imaging.
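
On the third takeaway: global scores such as PSNR and SSIM average over every pixel, so a small hallucinated structure barely moves them, while a local quality map can expose it. Below is a minimal sketch of that failure mode using scikit-image, with the Shepp-Logan phantom standing in for an MRI slice; the injected bright patch is an illustrative assumption, not data or a protocol from the paper.

```python
# Sketch: a tiny "hallucinated" patch leaves global PSNR/SSIM looking healthy,
# while the per-pixel SSIM map flags the insertion. Illustrative only.
from skimage.data import shepp_logan_phantom
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = shepp_logan_phantom()        # 400x400 float image in [0, 1]
recon = reference.copy()
recon[180:188, 180:188] = 0.9            # inject an 8x8 bright "lesion"

# Global metrics: 64 altered pixels are diluted by 160,000 total pixels.
print("global PSNR:", peak_signal_noise_ratio(reference, recon, data_range=1.0))
print("global SSIM:", structural_similarity(reference, recon, data_range=1.0))

# Per-pixel SSIM map: the minimum over local windows isolates the patch.
_, ssim_map = structural_similarity(reference, recon, data_range=1.0, full=True)
print("worst local SSIM:", ssim_map.min())
```

The global scores change only slightly while the local SSIM minimum drops sharply, which is the gap that motivates dedicated detection methods.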

Electrical Engineering and Systems Science > Image and Video Processing
arXiv:2602.18536 (eess) [Submitted on 20 Feb 2026]

Title: Triggering hallucinations in model-based MRI reconstruction via adversarial perturbations
Authors: Suna Buğday, Yvan Saeys, Jonathan Peck

Abstract: Generative models are increasingly used to improve the quality of medical imaging, such as reconstruction of magnetic resonance images and computed tomography. However, it is well known that such models are susceptible to hallucinations: they may insert features into the reconstructed image which are not actually present in the original image. In a medical setting, such hallucinations may endanger patient health, as they can lead to incorrect diagnoses. In this work, we aim to quantify the extent to which state-of-the-art generative models suffer from hallucinations in the context of magnetic resonance image reconstruction. Specifically, we craft adversarial perturbations resembling random noise for the unprocessed input images which induce hallucinations when reconstructed using a generative model. We perform this evaluation on the brain and knee images from the fastMRI data set, using UNet and end-to-end VarNet architectures to reconstruct the images. Our results show that these models are highly susceptible to small...
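
The perturbation-crafting step the abstract describes can be read as a projected-gradient-style attack on the reconstruction input. Here is a minimal, hedged sketch in PyTorch: it assumes some differentiable reconstruction network `model` (a fastMRI-style UNet, say) and simply maximizes how far the perturbed reconstruction drifts from the clean one within an L-infinity budget. The loss, budget `eps`, step size, and iteration count are assumptions for illustration; the paper's actual objective may differ, for example by targeting specific inserted features.

```python
# PGD-style sketch: find a small, noise-like perturbation of the input that
# pushes the reconstruction away from the clean reconstruction. All
# hyperparameters here are illustrative assumptions.
import torch

def craft_hallucination_perturbation(model, x, eps=0.01, step=0.002, iters=20):
    """x: batch of unprocessed input images; model: differentiable reconstructor."""
    model.eval()
    with torch.no_grad():
        clean_recon = model(x)                        # reference reconstruction
    delta = torch.empty_like(x).uniform_(-eps, eps)   # random noise-like init
    delta.requires_grad_(True)
    for _ in range(iters):
        loss = torch.nn.functional.mse_loss(model(x + delta), clean_recon)
        loss.backward()                               # gradient w.r.t. delta
        with torch.no_grad():
            delta += step * delta.grad.sign()         # ascend: maximize drift
            delta.clamp_(-eps, eps)                   # stay inside the budget
        delta.grad.zero_()
    return delta.detach()

# Usage sketch: recon_adv = model(x + craft_hallucination_perturbation(model, x))
```

Sign-gradient ascent with an L-infinity projection is the standard PGD recipe; keeping `eps` small is what makes the resulting perturbation visually indistinguishable from noise.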

Related Articles

AI Infrastructure

UMKC Announces New Master of Science in Artificial Intelligence

UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...

AI News - General · 4 min
Machine Learning

[D] Looking for definition of open-world ish learning problem

Hello! Recently I did a project where I initially had around 30 target classes. But at inference, the model had to be able to handle a lo...

Reddit - Machine Learning · 1 min
Machine Learning

Mystery Shopping Meets Machine Learning: Can Algorithms Become the Ultimate Customer Experience Auditor?

Customer expectations across Africa are shifting faster than most organisations can track. A single inconsistent interaction can ignite a...

AI News - General · 8 min
Machine Learning

GitHub to Use User Data for AI Training by Default

Reddit - Artificial Intelligence · 1 min