[2604.03556] Focus Matters: Phase-Aware Suppression for Hallucination in Vision-Language Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2604.03556 (cs) [Submitted on 4 Apr 2026]

Title: Focus Matters: Phase-Aware Suppression for Hallucination in Vision-Language Models

Authors: Sohyeon Kim, Sang Yeon Yoon, Kyeongbo Kong

Abstract: Large Vision-Language Models (LVLMs) have achieved impressive progress in multimodal reasoning, yet they remain prone to object hallucinations, generating descriptions of objects that are not present in the input image. Recent approaches attempt to mitigate hallucinations by suppressing unreliable visual signals in the vision encoder, but many rely on iterative optimization for each input, resulting in substantial inference latency. In this work, we investigate the internal attention dynamics of vision encoders in LVLMs and identify a consistent three-phase structure of visual information processing: diffusion, focus, and rediffusion. Our analysis reveals that hallucination behavior is particularly sensitive to tokens receiving low attention during the focus phase. Motivated by this observation, we propose a lightweight inference-time intervention that selectively suppresses such tokens during the focus phase. The method operates in a training-free manner using statistics from a single forward pass and employs a Determinantal Point Process (DPP) ...
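The abstract's core mechanism can be illustrated with a small sketch. The snippet below is not the paper's implementation; it assumes a simple operationalization in which the "focus" phase is the contiguous window of encoder layers whose attention distributions have the lowest mean entropy (most concentrated), and low-attention tokens are those whose mean received attention in that window falls below a percentile threshold. The function names (`find_focus_phase`, `low_attention_tokens`), the entropy criterion, the window width, and the percentile cutoff are all illustrative assumptions; the DPP-based component is omitted since the abstract is truncated before describing it.

```python
import numpy as np

def attention_entropy(attn):
    # attn: (layers, queries, keys); each query row is a probability
    # distribution over key tokens. Returns mean entropy per layer.
    p = np.clip(attn, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1).mean(axis=-1)

def find_focus_phase(attn, width=2):
    # Assumed heuristic: the focus phase is the contiguous window of
    # `width` layers with the lowest average attention entropy.
    ent = attention_entropy(attn)
    scores = np.convolve(ent, np.ones(width) / width, mode="valid")
    start = int(scores.argmin())
    return start, start + width  # half-open layer range [start, end)

def low_attention_tokens(attn, start, end, pct=20.0):
    # Mean attention each key token receives during the focus phase;
    # tokens at or below the `pct`-th percentile are flagged for suppression.
    received = attn[start:end].mean(axis=(0, 1))
    thresh = np.percentile(received, pct)
    return np.where(received <= thresh)[0]

# Toy example: 4 layers, 8 tokens. Layers 0 and 3 are diffuse (uniform
# attention); layers 1 and 2 concentrate attention on token 0.
n, L = 8, 4
attn = np.full((L, n, n), 1.0 / n)
focused = np.full((n, n), 0.01)
focused[:, 0] = 1.0
focused /= focused.sum(axis=-1, keepdims=True)
attn[1] = focused
attn[2] = focused

start, end = find_focus_phase(attn, width=2)
suppress = low_attention_tokens(attn, start, end)
print(start, end)        # the low-entropy window
print(suppress.tolist()) # tokens flagged as low-attention in that window
```

In an actual LVLM the flagged tokens would then be masked or down-weighted in the vision encoder's focus-phase layers before the features are passed to the language model; here the example only shows the selection step, which matches the abstract's claim of needing statistics from a single forward pass.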