[2602.16537] Optimal training-conditional regret for online conformal prediction

arXiv - Machine Learning

Summary

The paper studies optimal training-conditional regret for online conformal prediction on non-stationary data streams, addressing both abrupt and smooth distribution shifts and proposing algorithms with minimax-optimal regret guarantees.

Why It Matters

This research is significant as it advances the understanding of online conformal prediction, particularly in adapting to non-stationary environments. It provides theoretical guarantees for new algorithms, which can enhance predictive accuracy in real-world applications where data distributions change over time.

Key Takeaways

  • Introduces a split-conformal style algorithm that uses drift detection to adaptively update calibration sets when non-conformity scores are pretrained.
  • Develops a full-conformal style algorithm for online-trained scores that relies on stability of the model-fitting algorithm rather than permutation symmetry.
  • Establishes non-asymptotic regret guarantees matching minimax lower bounds.
  • Focuses on independently generated data with two types of distribution shifts.
  • Numerical experiments validate the theoretical findings.
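To make the split-conformal idea in the takeaways concrete, here is a minimal sketch of online split conformal prediction with a calibration set that is reset when an abrupt shift is suspected. The z-score mean-shift check is a deliberately crude stand-in for the paper's drift detector, and all function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def split_conformal_threshold(cal_scores, alpha):
    """Standard split-conformal threshold: the ceil((1-alpha)(n+1))-th order
    statistic of the calibration scores gives (1-alpha) coverage."""
    n = len(cal_scores)
    k = min(int(np.ceil((1 - alpha) * (n + 1))), n)
    return np.sort(cal_scores)[k - 1]

def online_split_conformal(scores, alpha=0.1, window=200, drift_z=4.0):
    """Toy online loop: keep a sliding window of recent non-conformity
    scores as the calibration set, and drop it entirely when an incoming
    score looks like an abrupt change point (illustrative test only)."""
    cal, thresholds = [], []
    for s in scores:
        if len(cal) >= 20:
            mu, sd = np.mean(cal), np.std(cal) + 1e-8
            if abs(s - mu) / sd > drift_z:  # crude change-point flag
                cal = []                    # discard stale calibration data
        cal.append(s)
        cal = cal[-window:]
        if len(cal) >= 20:
            thresholds.append(split_conformal_threshold(cal, alpha))
    return thresholds
```

On a stationary stream the threshold settles near the (1-alpha) quantile of the score distribution; after a detected shift it is re-estimated from post-change data only, which is the intuition behind adaptively updated calibration sets.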

Mathematics > Statistics Theory
arXiv:2602.16537 (math)
[Submitted on 18 Feb 2026]

Title: Optimal training-conditional regret for online conformal prediction
Authors: Jiadong Liang, Zhimei Ren, Yuxin Chen

Abstract: We study online conformal prediction for non-stationary data streams subject to unknown distribution drift. While most prior work studied this problem under adversarial settings and/or assessed performance in terms of gaps of time-averaged marginal coverage, we instead evaluate performance through training-conditional cumulative regret. We specifically focus on independently generated data with two types of distribution shift: abrupt change points and smooth drift. When non-conformity score functions are pretrained on an independent dataset, we propose a split-conformal style algorithm that leverages drift detection to adaptively update calibration sets, which provably achieves minimax-optimal regret. When non-conformity scores are instead trained online, we develop a full-conformal style algorithm that again incorporates drift detection to handle non-stationarity; this approach relies on stability - rather than permutation symmetry - of the model-fitting algorithm, which is often better suited to online learning under evolving environments. We establish non-asymptotic regret guarantees for our online full conformal algor...
