[2602.16537] Optimal training-conditional regret for online conformal prediction
Summary
The paper studies training-conditional cumulative regret for online conformal prediction on non-stationary data streams, and proposes drift-aware algorithms with minimax-optimal performance guarantees under distribution shift.
Why It Matters
This research is significant as it advances the understanding of online conformal prediction, particularly in adapting to non-stationary environments. It provides theoretical guarantees for new algorithms, which can enhance predictive accuracy in real-world applications where data distributions change over time.
Key Takeaways
- Introduces a split-conformal style algorithm that uses drift detection to adaptively update calibration sets when score functions are pretrained on an independent dataset.
- Develops a full-conformal style algorithm for scores trained online, relying on algorithmic stability rather than permutation symmetry.
- Establishes non-asymptotic regret guarantees matching minimax lower bounds.
- Focuses on independently generated data under two types of distribution shift: abrupt change points and smooth drift.
- Numerical experiments validate the theoretical findings.
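To make the split-conformal takeaway concrete, here is a minimal sketch of the general idea: maintain a calibration set of nonconformity scores and reset it when a drift check fires. The mean-ratio detection rule, the 1.5 threshold, and the window sizes below are illustrative assumptions, not the paper's minimax-optimal procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_quantile(scores, alpha):
    """Standard split-conformal quantile: the ceil((n+1)(1-alpha))/n empirical level."""
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

# Stream of absolute-residual scores with an abrupt change point at t=200,
# where the score scale roughly doubles (standing in for a distribution shift).
stream = np.concatenate([np.abs(rng.normal(0.0, 1.0, 200)),
                         np.abs(rng.normal(0.0, 2.0, 200))])

alpha = 0.1
calibration = []
for s in stream:
    calibration.append(s)
    # Crude drift check (illustrative only): compare recent vs. older mean
    # scores and discard pre-shift scores when they diverge.
    if len(calibration) >= 60:
        recent = np.mean(calibration[-30:])
        older = np.mean(calibration[:-30])
        if recent > 1.5 * older:
            calibration = calibration[-30:]  # keep only recent scores

q = conformal_quantile(np.array(calibration), alpha)
print(f"calibration size after stream: {len(calibration)}, interval radius q = {q:.2f}")
```

After the change point the reset discards most pre-shift scores, so the quantile `q` reflects the post-shift score scale rather than a stale mixture.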
Mathematics > Statistics Theory · arXiv:2602.16537 (math) · Submitted on 18 Feb 2026
Authors: Jiadong Liang, Zhimei Ren, Yuxin Chen
Abstract
We study online conformal prediction for non-stationary data streams subject to unknown distribution drift. While most prior work studied this problem under adversarial settings and/or assessed performance in terms of gaps of time-averaged marginal coverage, we instead evaluate performance through training-conditional cumulative regret. We specifically focus on independently generated data with two types of distribution shift: abrupt change points and smooth drift. When non-conformity score functions are pretrained on an independent dataset, we propose a split-conformal style algorithm that leverages drift detection to adaptively update calibration sets, which provably achieves minimax-optimal regret. When non-conformity scores are instead trained online, we develop a full-conformal style algorithm that again incorporates drift detection to handle non-stationarity; this approach relies on stability, rather than permutation symmetry, of the model-fitting algorithm, which is often better suited to online learning under evolving environments. We establish non-asymptotic regret guarantees for our online full conformal algor...
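For readers less familiar with the full-conformal construction mentioned above, the following is a generic, textbook-style sketch for a toy linear model: each candidate label on a grid is kept when, after refitting on the augmented dataset, its nonconformity score is not among the most extreme fraction. This only illustrates the basic mechanism; the paper's online, stability-based variant differs substantially.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: y = 2x + Gaussian noise.
x = rng.uniform(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.2, 50)
x_new, alpha = 0.5, 0.1

# Full conformal: for each candidate label, refit the model on the augmented
# dataset and keep the candidate when its absolute residual is not among the
# largest alpha fraction of all residuals.
kept = []
for y_cand in np.linspace(-1.0, 3.0, 201):
    xs = np.append(x, x_new)
    ys = np.append(y, y_cand)
    coeffs = np.polyfit(xs, ys, 1)              # refit a line on augmented data
    scores = np.abs(ys - np.polyval(coeffs, xs))
    p_value = np.mean(scores >= scores[-1])     # rank of the candidate's score
    if p_value > alpha:
        kept.append(y_cand)

lo, hi = min(kept), max(kept)
print(f"full-conformal interval at x={x_new}: [{lo:.2f}, {hi:.2f}]")
```

Note the computational cost: the model is refit once per grid point, which is exactly why stability of the fitting algorithm, rather than refitting with permutation symmetry, matters in the online setting the paper considers.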