[2603.20314] VGS-Decoding: Visual Grounding Score Guided Decoding for Hallucination Mitigation in Medical VLMs
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.20314 (cs)
[Submitted on 19 Mar 2026]

Title: VGS-Decoding: Visual Grounding Score Guided Decoding for Hallucination Mitigation in Medical VLMs
Authors: Govinda Kolli, Adinath Madhavrao Dukre, Behzad Bozorgtabar, Dwarikanath Mahapatra, Imran Razzak

Abstract: Medical Vision-Language Models (VLMs) often hallucinate by generating responses based on language priors rather than visual evidence, posing risks in clinical applications. We propose Visual Grounding Score Guided Decoding (VGS-Decoding), a training-free method to mitigate hallucinations during inference. Our key insight is that hallucinated tokens maintain or increase their probability when visual information is degraded, while visually grounded tokens decrease in probability. We introduce the Visual Grounding Score (VGS), which measures each token's visual dependency by comparing distributions from original and distorted images. During decoding, we reweight probabilities by amplifying visually grounded tokens while suppressing hallucinations. Unlike fixed-weight contrastive methods, VGS-Decoding provides per-token adaptive control. Experiments on MIMIC-Diff-VQA and VQA-RAD across LLaVA-Med, CheXagent, and MedGemma demonstrate consistent improvements, with up t...
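The abstract's core idea — scoring each token by how much its probability drops when the image is distorted, then reweighting the decoding distribution — can be sketched as follows. This is a minimal illustration of the general mechanism described in the abstract, not the authors' implementation: the function name `vgs_reweight`, the parameter `alpha`, and the exact form of the score (a log-probability difference between clean-image and distorted-image distributions) are all assumptions for the sake of the example.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

def vgs_reweight(logits_orig, logits_distorted, alpha=1.0):
    """Sketch of a VGS-style per-token adaptive reweighting (hypothetical API).

    logits_orig:      next-token logits given the original image
    logits_distorted: next-token logits given a degraded image
    alpha:            strength of the grounding adjustment (assumed knob)

    Tokens whose probability falls under image distortion are treated as
    visually grounded and amplified; tokens whose probability holds or
    rises are treated as hallucination-prone and suppressed.
    """
    p_orig = softmax(np.asarray(logits_orig, dtype=float))
    p_dist = softmax(np.asarray(logits_distorted, dtype=float))

    # One possible Visual Grounding Score: positive when the token's
    # probability depends on intact visual evidence.
    vgs = np.log(p_orig + 1e-9) - np.log(p_dist + 1e-9)

    # Per-token adaptive adjustment, in contrast to a single fixed
    # contrastive weight applied uniformly to the whole distribution.
    return softmax(np.asarray(logits_orig, dtype=float) + alpha * vgs)

# Toy example: token 0 loses probability under distortion (grounded),
# token 1 gains probability under distortion (hallucination-prone).
p_new = vgs_reweight([2.0, 1.0], [0.0, 2.0], alpha=1.0)
```

In this toy case the grounded token's share of probability mass grows after reweighting, while the hallucination-prone token is suppressed, which is the qualitative behavior the abstract describes.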