[2602.16723] Is Mamba Reliable for Medical Imaging?


Summary

This paper evaluates the reliability of Mamba, a state-space model, for medical imaging under various attack scenarios, highlighting vulnerabilities and the need for robust defenses.

Why It Matters

As medical imaging increasingly relies on AI models, understanding their vulnerabilities is crucial for patient safety and diagnostic accuracy. This research examines Mamba's robustness against both adversarial perturbations and hardware-inspired fault attacks, which is vital for its deployment in real-world clinical settings.

Key Takeaways

  • Mamba offers efficient processing for medical imaging but has notable vulnerabilities.
  • The study tests Mamba against various adversarial attacks, revealing significant impacts on accuracy.
  • Defensive strategies are necessary for safe deployment of Mamba in medical contexts.
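The white-box adversarial attacks the study evaluates include the Fast Gradient Sign Method (FGSM). As an illustration only (not code from the paper), here is a minimal FGSM step on a toy one-pixel logistic classifier, where the gradient of the loss with respect to the input can be written in closed form; the model, parameters, and function name are all hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_step(x, y, w, b, eps):
    """One FGSM step on a toy 1-D logistic classifier.

    x: scalar input in [0, 1] (a stand-in for a single pixel)
    y: true label in {0, 1}
    w, b: fixed model parameters
    eps: L-infinity perturbation budget
    """
    p = sigmoid(w * x + b)                    # model confidence for class 1
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    grad_x = (p - y) * w
    # FGSM: step the input in the sign of the gradient to increase the loss
    x_adv = x + eps * math.copysign(1.0, grad_x)
    # Keep the perturbed "pixel" in the valid intensity range
    return min(1.0, max(0.0, x_adv))
```

For example, with `w=4.0, b=-2.0`, a correctly classified input `x=0.8, y=1` is pushed down to `0.7`, lowering the model's confidence in the true class. PGD, also tested in the paper, iterates this step with a projection back into the epsilon-ball.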

Computer Science > Cryptography and Security

arXiv:2602.16723 (cs) [Submitted on 16 Feb 2026]

Title: Is Mamba Reliable for Medical Imaging?

Authors: Banafsheh Saber Latibari, Najmeh Nazari, Daniel Brignac, Hossein Sayadi, Houman Homayoun, Abhijit Mahalanobis

Abstract: State-space models like Mamba offer linear-time sequence processing and low memory use, making them attractive for medical imaging. However, their robustness under realistic software and hardware threat models remains underexplored. This paper evaluates Mamba on multiple MedMNIST classification benchmarks under input-level attacks, including white-box adversarial perturbations (FGSM/PGD), occlusion-based PatchDrop, and common acquisition corruptions (Gaussian noise and defocus blur), as well as hardware-inspired fault attacks emulated in software via targeted and random bit-flip injections into weights and activations. We profile vulnerabilities and quantify the impacts on accuracy, indicating that defenses are needed for deployment.

Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI)

Cite as: arXiv:2602.16723 [cs.CR] (or arXiv:2602.16723v1 [cs.CR] for this version), https://doi.org/10.48550/arXiv.2602.16723
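The fault attacks in the abstract are emulated in software by flipping individual bits in the stored representation of weights. A minimal sketch of that mechanism (my illustration, not the paper's code) packs a weight to its IEEE-754 float32 encoding, XORs one bit, and unpacks it:

```python
import struct

def flip_bit(value, bit):
    """Flip one bit in the IEEE-754 float32 encoding of `value`.

    Emulates a hardware fault in a stored weight: pack the float to its
    32-bit representation, XOR a single bit position (0 = mantissa LSB,
    30 = exponent MSB, 31 = sign), and unpack back to a float.
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (faulty,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return faulty
```

The position of the flipped bit matters enormously, which is why the paper distinguishes targeted from random injections: flipping the high exponent bit of a weight like `0.5` explodes it to roughly `1.7e38`, while flipping a low mantissa bit changes it only in the eighth decimal place.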

