[2602.22278] RETLLM: Training and Data-Free MLLMs for Multimodal Information Retrieval

arXiv - Machine Learning · 4 min read

Summary

The paper presents RETLLM, a framework for multimodal information retrieval (MMIR) that requires neither training nor large datasets: it prompts multimodal large language models (MLLMs) to generate retrieval similarity scores directly, using a coarse-then-fine pipeline.

Why It Matters

This research addresses the limitations of existing multimodal large language models (MLLMs) that require extensive pre-training and large datasets. By demonstrating effective MMIR capabilities without these requirements, it opens new avenues for efficient information retrieval in various applications, making it relevant for researchers and practitioners in AI and machine learning.

Key Takeaways

  • RETLLM enables multimodal information retrieval without training or large datasets.
  • The framework uses a coarse-then-fine pipeline for effective querying.
  • A visual enhancement module improves retrieval by re-picking forgotten visuals.
  • Extensive experiments show RETLLM outperforms traditional fine-tuned models.
  • The research highlights the inherent multimodal reasoning ability of MLLMs.

Computer Science > Information Retrieval

arXiv:2602.22278 (cs) · Submitted on 25 Feb 2026

Title: RETLLM: Training and Data-Free MLLMs for Multimodal Information Retrieval

Authors: Dawei Su, Dongsheng Wang

Abstract: Multimodal information retrieval (MMIR) has gained attention for its flexibility in handling text, images, or mixed queries and candidates. Recent breakthroughs in multimodal large language models (MLLMs) boost MMIR performance by incorporating MLLM knowledge under the contrastive finetuning framework. However, they suffer from pre-training inconsistency and require large datasets. In this work, we introduce a novel framework, RetLLM, designed to query MLLMs for MMIR in a training- and data-free manner. Specifically, we formulate MMIR as a similarity score generation task and prompt MLLMs to directly predict retrieval scores in a coarse-then-fine pipeline. At the coarse stage, a top-k filtering strategy builds a small yet high-quality candidate pool for each query, enabling MLLMs to focus on semantically relevant candidates. Subsequently, the retrieval score is predicted by feeding both the query and candidate into MLLMs at the fine stage. Importantly, we propose a visual enhancement module during reasoning to help MLLMs re-pick forgotten visuals, improving retrieval. Extensive experiments on MMIR ben...
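The abstract gives enough to sketch the coarse-then-fine pipeline in miniature. The code below is an illustrative sketch, not the paper's implementation: the names `coarse_topk`, `fine_rerank`, and `mllm_score` are hypothetical, the coarse stage here uses cosine similarity over precomputed embeddings, a stub function stands in for the actual MLLM score-generation prompt, and the visual enhancement module is omitted entirely.

```python
import numpy as np

def coarse_topk(query_emb, cand_embs, k=5):
    """Coarse stage: rank every candidate by cosine similarity to the
    query embedding and keep only the top-k indices as the candidate pool."""
    q = query_emb / np.linalg.norm(query_emb)
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity per candidate
    return np.argsort(-sims)[:k].tolist()

def fine_rerank(query, pool, mllm_score):
    """Fine stage: feed each (query, candidate) pair to a scoring function
    (in the paper, an MLLM prompted for a relevance score) and sort by it."""
    scored = [(cand, mllm_score(query, cand)) for cand in pool]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored

# Toy demo: candidate 42, lightly perturbed, serves as the query, so the
# coarse stage should place it at the top of the pool.
rng = np.random.default_rng(0)
cand_embs = rng.normal(size=(100, 16))
query_emb = cand_embs[42] + 0.01 * rng.normal(size=16)
pool = coarse_topk(query_emb, cand_embs, k=5)
print(pool)
```

The point of the two-stage split is cost: the cheap coarse filter shrinks the candidate set so that the expensive per-pair MLLM scoring call only runs k times per query instead of once per candidate.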


