[2603.26683] LITTA: Late-Interaction and Test-Time Alignment for Visually-Grounded Multimodal Retrieval
Computer Science > Information Retrieval

arXiv:2603.26683 (cs) [Submitted on 10 Mar 2026]

Title: LITTA: Late-Interaction and Test-Time Alignment for Visually-Grounded Multimodal Retrieval

Authors: Seonok Kim

Abstract: Retrieving relevant evidence from visually rich documents such as textbooks, technical reports, and manuals is challenging due to long context, complex layouts, and weak lexical overlap between user questions and supporting pages. We propose LITTA, a query-expansion-centric retrieval framework for evidence page retrieval that improves multimodal document retrieval without retriever retraining. Given a user query, LITTA generates complementary query variants using a large language model and retrieves candidate pages for each variant using a frozen vision retriever with late-interaction scoring. Candidates from expanded queries are then aggregated through reciprocal rank fusion to improve evidence coverage and reduce sensitivity to any single phrasing. This simple test-time strategy significantly improves retrieval robustness while remaining compatible with existing multimodal embedding indices. We evaluate LITTA on visually grounded document retrieval tasks across three domains: computer science, pharmaceuticals, and industrial manuals. Multi-query retrieval consistently improves top-k accuracy, reca...
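The abstract's two core mechanisms, late-interaction (MaxSim-style) scoring and reciprocal rank fusion over per-variant candidate lists, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the RRF constant `k=60` is the common default and is an assumption here, and the page IDs and embeddings are toy placeholders.

```python
import numpy as np

def maxsim_score(query_emb, page_emb):
    """Late-interaction (ColBERT-style MaxSim) score between one query and
    one page: each query token matches its best page patch, then sum."""
    sims = query_emb @ page_emb.T          # token-by-patch similarity matrix
    return float(sims.max(axis=1).sum())   # best match per query token, summed

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists from the expanded query variants.
    k=60 is the widely used RRF default (an assumption, not from the paper)."""
    scores = {}
    for ranking in rankings:
        for rank, page_id in enumerate(ranking, start=1):
            scores[page_id] = scores.get(page_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: three query variants, each yielding a ranked list of page IDs.
rankings = [
    ["p3", "p1", "p7"],
    ["p1", "p3", "p2"],
    ["p1", "p7", "p5"],
]
fused = reciprocal_rank_fusion(rankings)
print(fused[:2])  # → ['p1', 'p3']: p1 ranks high under every variant
```

Because RRF depends only on ranks, it needs no score calibration across query variants, which is what lets the frozen retriever and its existing embedding index be reused unchanged at test time.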