[2601.03100] Text-Guided Layer Fusion Mitigates Hallucination in Multimodal LLMs
Summary
The paper presents TGIF (Text-Guided Inter-layer Fusion), a method that mitigates hallucinations in multimodal large language models (MLLMs) by fusing visual features from multiple vision-encoder layers under guidance from the text prompt.
Why It Matters
As MLLMs increasingly integrate visual and textual data, addressing hallucinations—where models generate inaccurate or unfounded information—is critical for their reliability. TGIF strengthens visual grounding and improves performance on a range of benchmarks without requiring any updates to the vision encoder.
Key Takeaways
- TGIF introduces a dynamic, prompt-dependent fusion of visual features drawn from multiple encoder layers.
- The method improves performance on hallucination, OCR, and VQA benchmarks.
- It preserves or enhances performance on other tasks like ScienceQA and GQA.
- TGIF operates without requiring updates to the vision encoder, minimizing overhead.
- The approach emphasizes the importance of utilizing the full hierarchy of visual features.
arXiv:2601.03100 (cs.CV) [Submitted on 6 Jan 2026 (v1), last revised 17 Feb 2026 (this version, v2)]
Authors: Chenchen Lin, Sanbao Su, Rachel Luo, Yuxiao Chen, Yan Wang, Marco Pavone, Fei Miao
Abstract: Multimodal large language models (MLLMs) typically rely on a single late-layer feature from a frozen vision encoder, leaving the encoder's rich hierarchy of visual cues under-utilized. MLLMs still suffer from visually ungrounded hallucinations, often relying on language priors rather than image evidence. While many prior mitigation strategies operate on the text side, they leave the visual representation unchanged and do not exploit the rich hierarchy of features encoded across vision layers. Existing multi-layer fusion methods partially address this limitation but remain static, applying the same layer mixture regardless of the query. In this work, we introduce TGIF (Text-Guided Inter-layer Fusion), a lightweight module that treats encoder layers as depth-wise "experts" and predicts a prompt-dependent fusion of visual features. TGIF follows the principle of direct external fusion, requires no vision-encoder updates, and adds minimal overhead. Integrated into LLaVA-1.5-7B, TGIF provides...
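To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of prompt-dependent layer fusion: a small gating function maps a pooled text-prompt embedding to one logit per encoder layer, and the softmax over those logits weights a sum of per-layer visual features. The gating matrix `W_gate`, the pooled prompt embedding, and all tensor shapes here are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def text_guided_fusion(layer_feats, text_emb, W_gate):
    """Fuse per-layer visual features with prompt-dependent weights.

    layer_feats: (L, N, D) features from L encoder layers, N visual tokens, dim D
    text_emb:    (T,) pooled text-prompt embedding (illustrative assumption)
    W_gate:      (L, T) gating matrix producing one logit per layer
    """
    logits = W_gate @ text_emb                          # (L,) one logit per layer
    weights = softmax(logits)                           # mixture over layers, sums to 1
    fused = np.tensordot(weights, layer_feats, axes=1)  # (N, D) weighted sum of layers
    return fused, weights

# Toy example: 4 encoder layers, 9 visual tokens, feature dim 16, prompt dim 8.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 9, 16))
prompt = rng.standard_normal(8)
W_gate = rng.standard_normal((4, 8))
fused, weights = text_guided_fusion(feats, prompt, W_gate)
```

Because the weights depend on the prompt, a different question about the same image yields a different layer mixture, in contrast to the static multi-layer fusion schemes the abstract describes.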