[2603.00511] Multimodal Adaptive Retrieval Augmented Generation through Internal Representation Learning
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.00511 (cs) [Submitted on 28 Feb 2026]

Title: Multimodal Adaptive Retrieval Augmented Generation through Internal Representation Learning
Authors: Ruoshuang Du, Xin Sun, Qiang Liu, Bowen Song, Zhongqi Chen, Weiqiang Wang, Liang Wang

Abstract: Visual Question Answering (VQA) systems face reliability issues due to hallucinations, where models generate answers misaligned with the visual input or with factual knowledge. While Retrieval Augmented Generation (RAG) frameworks mitigate this issue by incorporating external knowledge, static retrieval often introduces irrelevant or conflicting content, particularly in visual RAG settings where visually similar but semantically incorrect evidence may be retrieved. To address this, we propose Multimodal Adaptive RAG (MMA-RAG), which dynamically assesses the model's confidence in its internal knowledge to decide whether to incorporate retrieved external information into the generation process. Central to MMA-RAG is a decision classifier, trained through layer-wise analysis, that leverages joint internal visual and textual representations to guide the use of reverse image retrieval. Experiments demonstrate that the model achieves significant improvements in response performance on three VQA datasets.
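The core mechanism described in the abstract — a learned classifier over internal representations that gates whether retrieved evidence is used — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear-probe form, the pooled-feature input, and the names `confidence_probe` and `adaptive_answer` are all assumptions for exposition.

```python
import numpy as np

def confidence_probe(features: np.ndarray, w: np.ndarray, b: float) -> float:
    """Hypothetical linear probe over pooled internal (visual+textual) features.

    Returns a sigmoid score interpreted as the model's confidence that its
    internal knowledge alone suffices to answer the question.
    """
    z = float(features @ w + b)
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_answer(features: np.ndarray, w: np.ndarray, b: float,
                    threshold: float = 0.5) -> str:
    """Gate retrieval on the probe's confidence (sketch of the adaptive step).

    High confidence -> answer from parametric knowledge; low confidence ->
    fall back to externally retrieved evidence (e.g., reverse image retrieval).
    """
    if confidence_probe(features, w, b) >= threshold:
        return "internal"   # skip retrieval
    return "retrieve"       # augment generation with external evidence
```

In practice such a probe would be trained layer-wise on hidden states extracted from the multimodal model, selecting the layer whose representations best separate answerable from unanswerable questions; the sketch above only shows the inference-time gating.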