[2601.13227] Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets?
Computer Science > Information Retrieval
arXiv:2601.13227 (cs)
[Submitted on 19 Jan 2026 (v1), last revised 27 Mar 2026 (this version, v2)]

Title: Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets?
Authors: Laura Dietz, Bryan Li, Eugene Yang, Dawn Lawrie, William Walden, James Mayfield

Abstract: RAG systems are increasingly evaluated and optimized using LLM judges, an approach that is rapidly becoming the dominant paradigm for system assessment. Nugget-based approaches in particular are now embedded not only in evaluation frameworks but also in the architectures of RAG systems themselves. While this integration can lead to genuine improvements, it also creates a risk of faulty measurements due to circularity. In this paper, we investigate this risk through comparative experiments with nugget-based RAG systems, including Ginger and Crucible, against strong baselines such as GPT-Researcher. By deliberately modifying Crucible to generate outputs optimized for an LLM judge, we show that near-perfect evaluation scores can be achieved when elements of the evaluation - such as prompt templates or gold nuggets - are leaked or can be predicted. Our results highlight the importance of blind evaluation settings and methodological diversity to guard against mistaking metric overfitting for genuine ...