[2512.13872] Measuring Uncertainty Calibration
Computer Science > Machine Learning
arXiv:2512.13872 (cs)
[Submitted on 15 Dec 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Title: Measuring Uncertainty Calibration
Authors: Kamil Ciosek, Nicolò Felicioni, Sina Ghiassian, Juan Elenter Litwin, Francesco Tonolini, David Gustafsson, Eva Garcia-Martin, Carmen Barcena Gonzalez, Raphaëlle Bertrand-Lalo

Abstract: We make two contributions to the problem of estimating the $L_1$ calibration error of a binary classifier from a finite dataset. First, we provide an upper bound for any classifier whose calibration function has bounded variation. Second, we provide a method of modifying any classifier so that its calibration error can be upper bounded efficiently without significantly impacting classifier performance and without any restrictive assumptions. All our results are non-asymptotic and distribution-free. We conclude by providing advice on how to measure calibration error in practice. Our methods yield practical procedures that can be run on real-world datasets with modest overhead.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2512.13872 [cs.LG] (or arXiv:2512.13872v3 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2512.13872
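For context, the quantity the abstract refers to is the $L_1$ distance between a classifier's predicted probability and the true conditional frequency of the positive class. The sketch below shows the standard binned plug-in estimator of this quantity for a binary classifier; it is a common baseline, not the upper-bound procedures contributed by the paper, and the function name, bin count, and synthetic data are illustrative assumptions.

```python
import numpy as np

def binned_l1_calibration_error(probs, labels, n_bins=15):
    """Standard binned (plug-in) estimate of the L1 calibration error of a
    binary classifier. `probs` are predicted probabilities of the positive
    class, `labels` are 0/1 outcomes. Illustrative only; this is not the
    paper's bounded-variation upper bound."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    # Assign each prediction to one of n_bins equal-width bins on [0, 1].
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    n = len(probs)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if not mask.any():
            continue
        conf = probs[mask].mean()   # average predicted probability in the bin
        acc = labels[mask].mean()   # empirical frequency of the positive class
        ece += (mask.sum() / n) * abs(acc - conf)
    return ece

if __name__ == "__main__":
    # Synthetic, well-calibrated predictions should yield a value close to 0.
    rng = np.random.default_rng(0)
    p = rng.uniform(size=10_000)
    y = rng.binomial(1, p)
    print(binned_l1_calibration_error(p, y))
```

Note that this plug-in estimator is sensitive to the choice of `n_bins` and gives no finite-sample guarantee, which is precisely the gap the paper's non-asymptotic, distribution-free bounds are aimed at.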