[2603.03203] No Memorization, No Detection: Output Distribution-Based Contamination Detection in Small Language Models


Computer Science > Artificial Intelligence

arXiv:2603.03203 (cs) [Submitted on 3 Mar 2026]

Title: No Memorization, No Detection: Output Distribution-Based Contamination Detection in Small Language Models

Authors: Omer Sela

Abstract: CDD, or Contamination Detection via output Distribution, identifies data contamination by measuring the peakedness of a model's sampled outputs. We study the conditions under which this approach succeeds and fails on small language models ranging from 70M to 410M parameters. Using controlled contamination experiments on GSM8K, HumanEval, and MATH, we find that CDD's effectiveness depends critically on whether fine-tuning produces verbatim memorization. With low-rank adaptation, models can learn from contaminated data without memorizing it, and CDD performs at chance level even when the data is verifiably contaminated. Only when fine-tuning capacity is sufficient to induce memorization does CDD recover strong detection accuracy. Our results characterize a memorization threshold that governs detectability and highlight a practical consideration: parameter-efficient fine-tuning can produce contamination that output-distribution methods do not detect. Our code is available at this https URL

Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
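The core signal the abstract describes, a sharply peaked sampling distribution as evidence of memorization, can be sketched in a few lines. This is an illustrative toy, not the paper's actual CDD implementation: the function names (`edit_distance`, `peakedness_score`) and the edit-distance threshold are assumptions chosen for clarity.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(
                prev[j] + 1,                            # deletion
                cur[j - 1] + 1,                         # insertion
                prev[j - 1] + (a[i - 1] != b[j - 1]),   # substitution
            )
        prev = cur
    return prev[n]


def peakedness_score(greedy: str, samples: list[str], max_dist: int = 2) -> float:
    """Fraction of sampled outputs within max_dist edits of the greedy output.

    A score near 1.0 means the sampling distribution concentrates on one
    string -- the kind of peakedness an output-distribution detector would
    flag as possible memorization of contaminated data.
    """
    close = sum(1 for s in samples if edit_distance(greedy, s) <= max_dist)
    return close / len(samples)


# Toy example: a memorized answer yields near-identical samples,
# while a non-memorized one yields diverse strings.
memorized = peakedness_score("x = 42", ["x = 42", "x = 42", "x= 42", "x = 42"])
diverse = peakedness_score("x = 42", ["y = 7", "answer: 13", "x = 42", "z=0"])
print(memorized, diverse)  # 1.0 0.25
```

The paper's finding, read through this lens, is that LoRA fine-tuning on contaminated data can leave the score in the "diverse" regime even though the model learned from the benchmark, so a threshold on peakedness alone detects nothing.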

Originally published on March 04, 2026. Curated by AI News.

