[2602.14913] Coverage Guarantees for Pseudo-Calibrated Conformal Prediction under Distribution Shift
Summary
This paper establishes coverage guarantees for pseudo-calibrated conformal prediction under distribution shift and proposes a source-tuned pseudo-calibration algorithm designed to keep target coverage above a prescribed level.
Why It Matters
Maintaining coverage guarantees under distribution shift is crucial for deploying machine learning models reliably in real-world applications. Conformal prediction's guarantees rest on an exchangeability assumption that shifting data distributions violate; this research quantifies the resulting coverage loss and offers a principled way to compensate for it, improving model robustness in changing environments.
Key Takeaways
- Conformal prediction can fail under distribution shifts, impacting model reliability.
- Pseudo-calibration is proposed as a solution to mitigate performance loss.
- A new source-tuned pseudo-calibration algorithm is introduced to enhance coverage.
- Numerical experiments validate the effectiveness of the proposed methods.
- The study emphasizes the importance of adapting models to changing data distributions.
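The first takeaway rests on how split conformal calibration works: a threshold is set from calibration scores so that exchangeable test points are covered with probability at least 1 - alpha, and a shift in the score distribution breaks this. A minimal sketch (not from the paper; the uniform scores and the particular shift are illustrative assumptions) shows coverage holding in-distribution and degrading under shift:

```python
import numpy as np

def split_conformal_threshold(cal_scores, alpha):
    """Split-conformal quantile: with exchangeable data, prediction sets
    built from this threshold cover with probability >= 1 - alpha."""
    n = len(cal_scores)
    # Finite-sample corrected quantile level: ceil((n+1)(1-alpha)) / n
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, q_level, method="higher")

# Toy nonconformity scores (hypothetical; e.g. 1 - p_model(y|x)).
rng = np.random.default_rng(0)
cal_scores = rng.uniform(size=1000)             # calibration scores
test_scores = rng.uniform(size=1000)            # same distribution as calibration
shifted_scores = rng.uniform(size=1000) ** 0.5  # shifted: scores skew larger

tau = split_conformal_threshold(cal_scores, alpha=0.1)
print((test_scores <= tau).mean())     # coverage near 0.90 in-distribution
print((shifted_scores <= tau).mean())  # coverage falls below 0.90 under shift
```

The shifted scores illustrate exactly the failure mode the paper targets: the calibration-time quantile no longer bounds the target-domain score distribution.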
Abstract
Authors: Farbod Siahkali, Ashwin Verma, Vijay Gupta. Submitted on 16 Feb 2026. Subject: Computer Science > Machine Learning (cs).
Conformal prediction (CP) offers distribution-free marginal coverage guarantees under an exchangeability assumption, but these guarantees can fail if the data distribution shifts. We analyze the use of pseudo-calibration as a tool to counter this performance loss under a bounded label-conditional covariate shift model. Using tools from domain adaptation, we derive a lower bound on target coverage in terms of the source-domain loss of the classifier and a Wasserstein measure of the shift. Using this result, we provide a method to design pseudo-calibrated sets that inflate the conformal threshold by a slack parameter to keep target coverage above a prescribed level. Finally, we propose a source-tuned pseudo-calibration algorithm that interpolates between hard pseudo-labels and randomized labels as a function of classifier uncertainty. Numerical experiments show that our bounds qualitatively track pseudo-calibration behavior and that the source-tuned scheme mitigates coverage degradation under distribution shift while maintaining nontrivial prediction set s...
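The abstract's recipe has two ingredients: pseudo-calibrate on unlabeled target data using pseudo-labels that interpolate between the classifier's hard argmax and a randomized draw (depending on its uncertainty), then inflate the resulting threshold by a slack parameter. A hypothetical sketch of that recipe follows; the uncertainty cutoff, nonconformity score, and slack value are illustrative assumptions, not the paper's actual choices:

```python
import numpy as np

def pseudo_calibrate(probs, alpha, slack, rng):
    """Hypothetical pseudo-calibration sketch. `probs` holds classifier
    softmax outputs on unlabeled target data; true labels are unknown, so
    each point receives a pseudo-label. Confident points keep the argmax
    (hard pseudo-label); uncertain points draw a label from the predictive
    distribution, interpolating toward randomized labels."""
    n, k = probs.shape
    confident = probs.max(axis=1) >= 1 - 1.0 / k  # illustrative cutoff
    hard = probs.argmax(axis=1)
    randomized = np.array([rng.choice(k, p=p) for p in probs])
    pseudo_labels = np.where(confident, hard, randomized)
    # Nonconformity score: 1 - estimated probability of the pseudo-label.
    scores = 1 - probs[np.arange(n), pseudo_labels]
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    tau = np.quantile(scores, q_level, method="higher")
    return tau + slack  # inflate by slack to absorb the shift-induced gap

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(3), size=500)  # toy 3-class softmax outputs
tau = pseudo_calibrate(probs, alpha=0.1, slack=0.05, rng=rng)
sizes = (1 - probs <= tau).sum(axis=1)       # prediction-set sizes on toy data
print(float(tau), float(sizes.mean()))
```

In the paper, the slack parameter is chosen from the derived lower bound (via the source-domain loss and the Wasserstein measure of the shift) so that target coverage stays above the prescribed level; here it is simply a fixed constant for illustration.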