[2603.22623] To Agree or To Be Right? The Grounding-Sycophancy Tradeoff in Medical Vision-Language Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.22623 (cs)
[Submitted on 23 Mar 2026]

Title: To Agree or To Be Right? The Grounding-Sycophancy Tradeoff in Medical Vision-Language Models
Authors: OFM Riaz Rahman Aranya, Kevin Desai

Abstract: Vision-language models (VLMs) adapted to the medical domain have shown strong performance on visual question answering benchmarks, yet their robustness against two critical failure modes, hallucination and sycophancy, remains poorly understood, particularly in combination. We evaluate six VLMs (three general-purpose, three medical-specialist) on three medical VQA datasets and uncover a grounding-sycophancy tradeoff: models with the lowest hallucination propensity are the most sycophantic, while the most pressure-resistant model hallucinates more than all medical-specialist models. To characterize this tradeoff, we propose three metrics: L-VASE, a logit-space reformulation of VASE that avoids its double-normalization; CCS, a confidence-calibrated sycophancy score that penalizes high-confidence capitulation; and Clinical Safety Index (CSI), a unified safety index that combines grounding, autonomy, and calibration via a geometric mean. Across 1,151 test cases, no model achieves a CSI above 0.35, indicating that none of the e...
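The abstract describes CSI only as a geometric mean of grounding, autonomy, and calibration components. As a rough illustration of that combination rule (the function name, arguments, and example scores below are assumptions, not the paper's actual definitions of the component scores), a minimal sketch:

```python
import math

def clinical_safety_index(grounding: float, autonomy: float, calibration: float) -> float:
    """Hypothetical CSI: geometric mean of three component scores in [0, 1].

    The geometric mean is small whenever any single component is small,
    so a model cannot mask poor calibration with strong grounding.
    """
    for score in (grounding, autonomy, calibration):
        if not 0.0 <= score <= 1.0:
            raise ValueError("component scores must lie in [0, 1]")
    return (grounding * autonomy * calibration) ** (1.0 / 3.0)

# Illustrative (made-up) component scores: one weak component drags CSI down.
print(clinical_safety_index(0.9, 0.8, 0.1))  # well below the arithmetic mean of 0.6
```

This property of the geometric mean may be why a unified safety index built this way stays low (below 0.35 across the paper's 1,151 test cases) if any one of the three axes is weak.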