[2602.10833] Training-Induced Bias Toward LLM-Generated Content in Dense Retrieval

arXiv - Machine Learning · 4 min read · Article

Summary

This study investigates training-induced bias toward LLM-generated content in dense retrieval systems, showing how the choice of training data and fine-tuning method shapes a retriever's preference for generated text.

Why It Matters

Understanding the biases introduced during training is crucial for building reliable dense retrieval systems. This research shows how the choice of training data shapes ranking behavior and can inform future work in AI and information retrieval.

Key Takeaways

  • Unsupervised retrievers do not consistently prefer LLM-generated content.
  • Supervised fine-tuning on MS MARCO shifts rankings towards LLM-generated text.
  • Fine-tuning on LLM-generated data induces significant pro-LLM bias.
  • Dataset-specific preferences emerge from in-domain fine-tuning.
  • Source bias is a training-induced phenomenon, not an inherent trait.

Computer Science > Information Retrieval
arXiv:2602.10833 (cs) · Submitted on 11 Feb 2026

Title: Training-Induced Bias Toward LLM-Generated Content in Dense Retrieval
Authors: William Xion, Wolfgang Nejdl

Abstract: Dense retrieval is a promising approach for acquiring relevant context or world knowledge in open-domain natural language processing tasks and is now widely used in information retrieval applications. However, recent reports claim that retrievers broadly prefer text generated by large language models (LLMs). This preference is called "source bias", and it has been hypothesized that the lower perplexity of LLM-generated text contributes to the effect. In this study, we revisit this claim with a controlled evaluation that traces the emergence of such preferences across training stages and data sources. Using parallel human- and LLM-generated counterparts of the SciFact and Natural Questions (NQ320K) datasets, we compare unsupervised checkpoints with models fine-tuned on in-domain human text, in-domain LLM-generated text, and MS MARCO. Our results show the following: 1) Unsupervised retrievers do not exhibit a uniform pro-LLM preference; the direction and magnitude depend on the dataset. 2) Across the settings tested, supervised fine-tuning on MS MARCO consistently shifts rankings toward LLM-generated text. 3) In-domain fine-tuni...
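The evaluation the abstract describes hinges on comparing how a retriever scores paired human- and LLM-written versions of the same passage. As a minimal sketch of how such a pairwise preference could be quantified (the function name, score values, and threshold semantics here are illustrative assumptions, not taken from the paper):

```python
def pro_llm_rate(scores_human, scores_llm):
    """Fraction of query pairs where the LLM-written passage
    receives a higher retrieval score than its human counterpart.

    scores_human / scores_llm: parallel lists of relevance scores
    (e.g. query-document cosine similarities from a dense retriever),
    index i in both lists referring to the same query.
    """
    if len(scores_human) != len(scores_llm) or not scores_human:
        raise ValueError("expected two non-empty parallel score lists")
    wins = sum(llm > human for human, llm in zip(scores_human, scores_llm))
    return wins / len(scores_human)

# Toy scores for 5 hypothetical queries (made-up numbers):
human = [0.72, 0.65, 0.80, 0.55, 0.60]
llm = [0.75, 0.60, 0.85, 0.58, 0.59]
print(pro_llm_rate(human, llm))  # → 0.6
```

A rate near 0.5 would indicate no systematic preference, while values well above 0.5 would signal the pro-LLM source bias the paper attributes to supervised fine-tuning rather than to the retriever itself.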


