[2502.01969] Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration

arXiv - AI 4 min read


Computer Science > Computer Vision and Pattern Recognition

arXiv:2502.01969 (cs) [Submitted on 4 Feb 2025 (v1), last revised 24 Mar 2026 (this version, v2)]

Title: Mitigating Object Hallucinations in Large Vision-Language Models via Attention Calibration

Authors: Younan Zhu, Linwei Tao, Minjing Dong, Chang Xu

Abstract: Large Vision-Language Models (LVLMs) exhibit impressive multimodal reasoning capabilities but remain highly susceptible to object hallucination, where models generate responses that are not factually aligned with the visual content. Recent works attribute this issue to an inherent bias of LVLMs, where the vision-token attention map places spurious focus on certain positions, and propose to mitigate it by reordering visual tokens. However, we find that different LVLMs exhibit different correlations between attention and spatial position, which makes existing static solutions difficult to generalize to other LVLMs. To begin with, we investigate the attention bias introduced by image tokens through a toy experiment, in which a blank image is fed into the model to capture its position-dependent bias. We then remove this bias from the original attention map, which already leads to a substantial reduction in hallucinations. This proof of concept validates the core intuition behind...
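The abstract's toy experiment suggests a simple two-step procedure: estimate a position-dependent attention bias from blank-image forward passes, then subtract it from the real attention map and renormalize. Below is a minimal NumPy sketch of that idea under stated assumptions: attention maps are row-normalized arrays of shape (queries, visual-token positions), and the function names (`estimate_position_bias`, `calibrate_attention`) are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def estimate_position_bias(blank_attn):
    # blank_attn: (runs, positions) attention rows collected from
    # blank-image inputs; averaging isolates the position-dependent bias.
    return blank_attn.mean(axis=0)

def calibrate_attention(attn, bias, eps=1e-9):
    # Subtract the per-position bias, clip negatives, and renormalize
    # each query row so it remains a valid attention distribution.
    corrected = np.clip(attn - bias, 0.0, None)
    return corrected / (corrected.sum(axis=-1, keepdims=True) + eps)

# Toy example: 3 query rows over 4 visual-token positions.
rng = np.random.default_rng(0)
attn = rng.random((3, 4))
attn /= attn.sum(axis=-1, keepdims=True)          # row-normalized attention
blank_runs = rng.random((5, 4))
blank_runs /= blank_runs.sum(axis=-1, keepdims=True)

bias = estimate_position_bias(blank_runs)
calibrated = calibrate_attention(attn, bias)
```

In a real LVLM the bias would be estimated per layer and head from the model's own attention outputs; this sketch only shows the subtract-and-renormalize arithmetic the abstract describes.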

Originally published on March 25, 2026. Curated by AI News.

Related Articles


🤖 AI News Digest - March 27, 2026

Today's AI news: 1. My minute-by-minute response to the LiteLLM malware attack The article describes a detailed, minute-by-minute respons...

Reddit - Artificial Intelligence · 1 min ·

[D] Real-time Student Attention Detection: ResNet vs Facial Landmarks - Which approach for resource-constrained deployment?

I have a problem statement where we are supposed to detect the attention level of a student in a classroom, basically output whether he is ...

Reddit - Machine Learning · 1 min ·

[D] We audited LoCoMo: 6.4% of the answer key is wrong and the judge accepts up to 63% of intentionally wrong answers

Projects are still submitting new scores on LoCoMo as of March 2026. We audited it and found 6.4% of the answer key is wrong, and the LLM...

Reddit - Machine Learning · 1 min ·

[P] ClaudeFormer: Building a Transformer Out of Claudes — Collaboration Request

I'm looking to work with people interested in math, machine learning, or agentic coding, on creating a multi-agent framework to do fronti...

Reddit - Machine Learning · 1 min ·

