[2601.15518] DS@GT at TREC TOT 2025: Bridging Vague Recollection with Fusion Retrieval and Learned Reranking


Summary

This paper presents a two-stage retrieval system for the TREC Tip-of-the-Tongue (ToT) task, combining complementary retrieval methods (sparse, dense, and LLM-based) with learned and LLM-based reranking to improve retrieval performance.
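
The "learned reranking" here is, per the abstract below, a LambdaMART model. As a rough illustration of how such a reranker is trained and applied, here is a minimal sketch using LightGBM's LGBMRanker; the features, relevance labels, and group sizes are synthetic placeholders, not the paper's actual training setup.

```python
# Illustrative LambdaMART reranker with LightGBM (synthetic data, not the paper's setup).
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)

# Toy training set: 100 queries, 10 candidate documents each, 4 features per
# candidate (e.g. sparse score, dense score, rank positions), graded labels 0-2.
n_queries, n_cands, n_feats = 100, 10, 4
X = rng.normal(size=(n_queries * n_cands, n_feats))
y = rng.integers(0, 3, size=n_queries * n_cands)
group = [n_cands] * n_queries            # rows per query, in order

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=50, verbose=-1)
ranker.fit(X, y, group=group)

# Rerank one new query's candidate pool by predicted score (highest first).
candidates = rng.normal(size=(n_cands, n_feats))
order = np.argsort(-ranker.predict(candidates))
print("reranked candidate indices:", order)
```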

Why It Matters

The study addresses retrieval from vague recollections: tip-of-the-tongue queries in which a user can describe an item but cannot recall its name, a case where standard keyword search often fails. By combining complementary retrieval methods, it shows how recall and ranking accuracy can be improved, which matters for search engines and AI-driven information systems.

Key Takeaways

  • Introduces a hybrid first-stage retriever that fuses LLM-based, sparse (BM25), and dense (BGE-M3) retrieval (see the fusion sketch after this list).
  • Uses topic-aware multi-index dense retrieval that partitions the Wikipedia corpus into 24 topical domains.
  • Reaches recall of 0.66 and NDCG@1000 of 0.41 on the test set, demonstrating the effectiveness of fusion retrieval.
  • Generates 5,000 synthetic ToT queries with LLMs to support reranker training.
  • Highlights the role of learned and LLM-based reranking in improving retrieval outcomes.
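
The takeaways above do not pin down the exact fusion rule, but reciprocal rank fusion (RRF) is a common way to merge sparse, dense, and LLM-based result lists. The self-contained sketch below illustrates that general idea and should be read as an assumption, not the authors' exact implementation.

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked lists of document ids into one fused ranking.

    Each document earns 1 / (k + rank) from every list it appears in, so
    documents retrieved by multiple methods float to the top.
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: sparse (BM25), dense, and LLM retrieval each return a ranked list.
bm25_hits  = ["d3", "d1", "d7"]
dense_hits = ["d1", "d4", "d3"]
llm_hits   = ["d1", "d7", "d9"]
print(reciprocal_rank_fusion([bm25_hits, dense_hits, llm_hits]))  # d1 first: found by all three
```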

Computer Science > Information Retrieval
arXiv:2601.15518 (cs)
[Submitted on 21 Jan 2026 (v1), last revised 14 Feb 2026 (this version, v2)]

Title: DS@GT at TREC TOT 2025: Bridging Vague Recollection with Fusion Retrieval and Learned Reranking
Authors: Wenxin Zhou, Ritesh Mehta, Anthony Miyaguchi

Abstract: We develop a two-stage retrieval system that combines multiple complementary retrieval methods with a learned reranker and LLM-based reranking, to address the TREC Tip-of-the-Tongue (ToT) task. In the first stage, we employ hybrid retrieval that merges LLM-based retrieval, sparse (BM25), and dense (BGE-M3) retrieval methods. We also introduce topic-aware multi-index dense retrieval that partitions the Wikipedia corpus into 24 topical domains. In the second stage, we evaluate both a trained LambdaMART reranker and LLM-based reranking. To support model training, we generate 5000 synthetic ToT queries using LLMs. Our best system achieves recall of 0.66 and NDCG@1000 of 0.41 on the test set by combining hybrid retrieval with Gemini-2.5-flash reranking, demonstrating the effectiveness of fusion retrieval.

Subjects: Information Retrieval (cs.IR); Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2601.15518 [cs.IR]
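
The topic-aware multi-index retrieval described in the abstract splits Wikipedia into 24 topical domains. One plausible reading is that each domain gets its own dense index and a query is searched only against the closest domains. The NumPy sketch below illustrates that routing idea with random vectors standing in for BGE-M3 embeddings; the domain count, centroid-based routing rule, and data are illustrative assumptions, not details confirmed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_TOPICS, DOCS_PER_TOPIC = 8, 3, 50   # toy sizes; the paper uses 24 domains

# One (vectors, doc_ids) index per topical domain, plus a routing centroid per domain.
indexes, centroids = {}, np.empty((N_TOPICS, DIM))
for t in range(N_TOPICS):
    vecs = rng.normal(size=(DOCS_PER_TOPIC, DIM))
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    indexes[t] = (vecs, [f"topic{t}_doc{i}" for i in range(DOCS_PER_TOPIC)])
    centroids[t] = vecs.mean(axis=0)

def search(query_vec, n_topics=2, k=5):
    """Route the query to the closest domains, then search only those indexes."""
    q = query_vec / np.linalg.norm(query_vec)
    picked = np.argsort(centroids @ q)[::-1][:n_topics]       # topic routing
    hits = []
    for t in picked:
        vecs, ids = indexes[int(t)]
        scores = vecs @ q                                      # cosine similarity
        best = np.argsort(scores)[::-1][:k]
        hits += [(float(scores[i]), ids[i]) for i in best]
    return sorted(hits, reverse=True)[:k]

print(search(rng.normal(size=DIM)))
```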

