[2602.17234] All Leaks Count, Some Count More: Interpretable Temporal Contamination Detection in LLM Backtesting

arXiv - Machine Learning · Article

Summary

The paper introduces a framework for detecting temporal knowledge leakage in LLM backtesting, proposing a new metric, Shapley-DCLR, and a method, TimeSPEC, to enhance prediction reliability.

Why It Matters

As large language models (LLMs) are increasingly used for predictive tasks, ensuring the integrity of backtesting processes is crucial. This research addresses the challenge of temporal contamination, which can undermine the validity of predictions by inadvertently incorporating future information. By providing a method to quantify and mitigate this leakage, the findings can enhance the reliability of LLM applications in various fields, including law and finance.

Key Takeaways

  • Introduces a framework to detect temporal knowledge leakage in LLMs.
  • Develops the Shapley-DCLR metric to quantify what fraction of decision-driving reasoning derives from leaked information.
  • Proposes TimeSPEC, a method to filter temporal contamination in predictions.
  • Demonstrates significant leakage in standard prompting baselines across various tasks.
  • Shows that TimeSPEC improves prediction reliability while maintaining performance.

Computer Science > Artificial Intelligence

arXiv:2602.17234 (cs) [Submitted on 19 Feb 2026]

Title: All Leaks Count, Some Count More: Interpretable Temporal Contamination Detection in LLM Backtesting
Authors: Zeyu Zhang, Ryan Chen, Bradly C. Stadie

Abstract: To evaluate whether LLMs can accurately predict future events, we need the ability to backtest them on events that have already resolved. This requires models to reason only with information available at a specified past date. Yet LLMs may inadvertently leak post-cutoff knowledge encoded during training, undermining the validity of retrospective evaluation. We introduce a claim-level framework for detecting and quantifying this temporal knowledge leakage. Our approach decomposes model rationales into atomic claims and categorizes them by temporal verifiability, then applies Shapley values to measure each claim's contribution to the prediction. This yields the Shapley-weighted Decision-Critical Leakage Rate (Shapley-DCLR), an interpretable metric that captures what fraction of decision-driving reasoning derives from leaked information. Building on this framework, we propose Time-Supervised Prediction with Extracted …
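To make the metric concrete, here is a minimal sketch of a Shapley-weighted leakage rate in the spirit of the abstract's description. It assumes an additive toy value function; the paper's actual claim extraction, coalition value function, and normalization are not specified here, so the names `shapley_dclr`, `value`, and the claim scores are illustrative only.

```python
# Hypothetical sketch of Shapley-DCLR: exact Shapley values over a small
# set of atomic claims, then the fraction of total (absolute) attribution
# carried by claims flagged as leaked post-cutoff knowledge.
from itertools import combinations
from math import factorial

def shapley_values(claims, value):
    """Exact Shapley value of each claim's contribution to value(subset)."""
    n = len(claims)
    phi = {c: 0.0 for c in claims}
    for c in claims:
        others = [x for x in claims if x != c]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[c] += weight * (value(set(subset) | {c}) - value(set(subset)))
    return phi

def shapley_dclr(claims, leaked, value):
    """Fraction of total absolute attribution coming from leaked claims."""
    phi = shapley_values(claims, value)
    total = sum(abs(v) for v in phi.values()) or 1.0
    return sum(abs(phi[c]) for c in claims if c in leaked) / total

# Toy example: prediction confidence is additive in the claims' scores,
# and claim "c2" has been flagged as temporally leaked.
scores = {"c1": 0.1, "c2": 0.5, "c3": 0.2}
value = lambda subset: sum(scores[c] for c in subset)
rate = shapley_dclr(list(scores), leaked={"c2"}, value=value)
print(round(rate, 3))  # → 0.625: leaked reasoning dominates the decision
```

For an additive value function the Shapley value of each claim equals its score, so the rate reduces to the leaked share of total score; the exact enumeration above is exponential in the number of claims, which is tractable only because rationales decompose into a handful of atomic claims.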

