[2602.21428] PSF-Med: Measuring and Explaining Paraphrase Sensitivity in Medical Vision Language Models
Summary
The paper introduces PSF-Med, a benchmark assessing paraphrase sensitivity in medical vision language models, revealing significant variability in response consistency across models.
Why It Matters
Understanding paraphrase sensitivity in medical VLMs is crucial for ensuring reliable AI applications in healthcare. The findings highlight potential risks in clinical settings where model responses may change based on question phrasing, emphasizing the need for robust evaluations beyond accuracy metrics.
Key Takeaways
- The PSF-Med benchmark reveals flip rates of 8% to 58% across six medical VLMs.
- Low flip rates do not guarantee visual grounding; some models rely heavily on language priors.
- Identifying and modifying specific model features can significantly reduce paraphrase sensitivity.
- Robustness evaluations should include tests for both paraphrase stability and image reliance.
- The study suggests that traditional accuracy metrics may be insufficient for assessing model reliability.
arXiv:2602.21428 (cs.CV) [Submitted on 24 Feb 2026]
Title: PSF-Med: Measuring and Explaining Paraphrase Sensitivity in Medical Vision Language Models
Authors: Binesh Sadanandan, Vahid Behzadan
Abstract: Medical Vision Language Models (VLMs) can change their answers when clinicians rephrase the same question, which raises deployment risks. We introduce Paraphrase Sensitivity Failure (PSF)-Med, a benchmark of 19,748 chest X-ray questions paired with about 92,000 meaning-preserving paraphrases across MIMIC-CXR and PadChest. Across six medical VLMs, we measure yes/no flips for the same image and find flip rates from 8% to 58%. However, a low flip rate does not imply visual grounding: text-only baselines show that some models stay consistent even when the image is removed, suggesting they rely on language priors. To study mechanisms in one model, we apply GemmaScope 2 Sparse Autoencoders (SAEs) to MedGemma 4B and analyze FlipBank, a curated set of 158 flip cases. We identify a sparse feature at layer 17 that correlates with prompt framing and predicts decision-margin shifts. In causal patching, removing this feature's contribution recovers 45% of the yes-minus-no logit margin on average and fully reverses 15% of flips. Acting on this finding...
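The abstract's flip-rate measurement can be illustrated with a short sketch. This is a minimal, hypothetical implementation assuming one plausible definition (a question "flips" if its paraphrase group's yes/no answers are not unanimous); the paper's exact metric and data format may differ.

```python
def flip_rate(groups):
    """Fraction of paraphrase groups whose yes/no answers disagree.

    `groups` maps a question id to the list of a model's answers
    ("yes"/"no") for the original question and its paraphrases,
    all asked about the same image. A group counts as flipped if
    the answers are not unanimous. This is an assumed definition,
    not the paper's verbatim metric.
    """
    flipped = sum(1 for answers in groups.values() if len(set(answers)) > 1)
    return flipped / len(groups)

# Toy example: two of four question groups contain a flip.
groups = {
    "q1": ["yes", "yes", "yes"],  # consistent under paraphrase
    "q2": ["yes", "no", "yes"],   # flips
    "q3": ["no", "no"],           # consistent
    "q4": ["no", "yes"],          # flips
}
print(flip_rate(groups))  # → 0.5
```

A text-only baseline, as the abstract notes, would rerun the same grouping with the image removed: a model that stays consistent without the image is likely leaning on language priors rather than visual evidence.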