[2602.12413] Soft Contamination Means Benchmarks Test Shallow Generalization
Summary
This paper examines soft contamination, i.e., semantic duplicates of benchmark test items that survive string-based decontamination of LLM training data, and shows that benchmark performance can overstate the true out-of-distribution generalization of large language models (LLMs).
Why It Matters
Benchmark scores guide model development and deployment decisions, so knowing when contamination inflates them is crucial for accurately assessing AI models. This research highlights how semantic duplicates bias evaluation metrics, which can mislead developers and researchers in the pursuit of genuine advancements in AI capabilities.
Key Takeaways
- Soft contamination from benchmark data can skew performance evaluations of LLMs.
- Semantic duplicates in training data lead to inflated benchmark performance.
- Standard decontamination filters rely on n-gram matching and therefore fail to detect semantic duplicates.
- The prevalence of soft contamination complicates the interpretation of benchmark gains.
- Improving model performance on benchmarks may not equate to genuine capability improvements.
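To make the third takeaway concrete, here is a minimal sketch of why n-gram decontamination misses semantic duplicates. The 8-gram window and the two example sentences are invented for illustration (the paper's actual filter is not specified here): an exact copy of a benchmark item shares all its word n-grams with the original, while a paraphrase with the same content shares none.

```python
def ngrams(text, n=8):
    """Return the set of word n-grams in a text (the unit typical filters match on)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def ngram_overlap(a, b, n=8):
    """Jaccard overlap between the n-gram sets of two texts."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Invented benchmark item and a hand-written semantic duplicate of it.
benchmark_item = ("Alice has three apples and gives one to Bob. "
                  "How many apples does Alice have left?")
semantic_dup = ("Alice holds 3 apples and hands a single one to Bob; "
                "how many remain with Alice?")

print(ngram_overlap(benchmark_item, benchmark_item))  # 1.0 -- exact duplicate is caught
print(ngram_overlap(benchmark_item, semantic_dup))    # 0.0 -- paraphrase slips through
```

Any threshold on n-gram overlap that admits the paraphrase admits the contamination; this is the gap the paper's embedding-based analysis targets.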
Paper Details
Computer Science > Machine Learning
arXiv:2602.12413 (cs) [Submitted on 12 Feb 2026]
Title: Soft Contamination Means Benchmarks Test Shallow Generalization
Authors: Ari Spiesberger, Juan J. Vazquez, Nicky Pochinkov, Tomáš Gavenčiak, Peli Grietzer, Gavin Leech, Nandi Schoots
Abstract: If LLM training data is polluted with benchmark test data, then benchmark performance gives biased estimates of out-of-distribution (OOD) generalization. Typical decontamination filters use n-gram matching, which fails to detect semantic duplicates: sentences with equivalent (or near-equivalent) content that are not close in string space. We study this soft contamination of training data by semantic duplicates. Among other experiments, we embed the Olmo3 training corpus and find that: 1) contamination remains widespread, e.g. we find semantic duplicates for 78% of CodeForces and exact duplicates for 50% of ZebraLogic problems; 2) including semantic duplicates of benchmark data in training does improve benchmark performance; and 3) when finetuning on duplicates of benchmark datapoints, performance also improves on truly-held-out datapoints from the same benchmark. We argue that recent benchmark gains are thus confounded: the prevalence of soft contamination means gains reflect both genuine capability improvements and the accumulation of t...
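The abstract's corpus-embedding experiment can be sketched as a nearest-neighbor scan: embed every training document and every benchmark item with a sentence encoder, then flag benchmark items whose closest training vector exceeds a cosine-similarity threshold. The code below is a toy version of that scan over precomputed vectors; the embedding model, the 0.9 threshold, and the hand-made vectors are all illustrative assumptions, not the paper's actual pipeline.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

def soft_contaminated(train_vecs, bench_vecs, threshold=0.9):
    """Flag benchmark items whose nearest training vector exceeds the threshold.

    Returns (benchmark_index, best_similarity) pairs for flagged items.
    """
    flagged = []
    for i, b in enumerate(bench_vecs):
        best = max(cosine(b, t) for t in train_vecs)
        if best >= threshold:
            flagged.append((i, best))
    return flagged

# Toy precomputed vectors standing in for real sentence embeddings.
train = [[1.0, 0.0, 0.1], [0.0, 1.0, 0.0]]
bench = [[0.98, 0.05, 0.12],   # near-duplicate of train[0]
         [0.1, 0.1, 1.0]]      # genuinely novel item
print(soft_contaminated(train, bench))  # flags only item 0
```

At real corpus scale the exhaustive max over training vectors would be replaced by an approximate nearest-neighbor index; the thresholding logic is the same.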