[2603.22988] Robustness Quantification and Uncertainty Quantification: Comparing Two Methods for Assessing the Reliability of Classifier Predictions
Computer Science > Machine Learning
arXiv:2603.22988 (cs.LG)
[Submitted on 24 Mar 2026]

Title: Robustness Quantification and Uncertainty Quantification: Comparing Two Methods for Assessing the Reliability of Classifier Predictions
Authors: Adrián Detavernier, Jasper De Bock

Abstract: We consider two approaches for assessing the reliability of the individual predictions of a classifier: Robustness Quantification (RQ) and Uncertainty Quantification (UQ). We explain the conceptual differences between the two approaches, compare them on a number of benchmark datasets, and show that RQ is capable of outperforming UQ, both in a standard setting and in the presence of distribution shift. Besides showing that RQ can be competitive with UQ, we also demonstrate the complementarity of RQ and UQ by showing that a combination of both approaches can lead to even better reliability assessments.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.22988 [cs.LG] (or arXiv:2603.22988v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.22988

Submission history
[v1] Tue, 24 Mar 2026 09:31:13 UTC (901 KB)
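The abstract does not spell out how RQ and UQ score individual predictions, so the following is a minimal, hypothetical sketch of the general idea: a common UQ baseline (maximum softmax probability) contrasted with a robustness-style stand-in score (how stable the predicted class is under small random input perturbations). Neither function is claimed to be the paper's actual method; the model, thresholds, and perturbation scheme are illustrative assumptions.

```python
# Hypothetical illustration only: the paper's actual RQ and UQ definitions
# are not given in the abstract. UQ below is a common baseline (maximum
# softmax probability); "RQ" is a stand-in robustness score (fraction of
# small random perturbations that leave the predicted class unchanged).

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def uq_score(logits):
    """UQ baseline: confidence = maximum softmax probability."""
    return softmax(logits).max(axis=-1)

def rq_score(model, x, n_perturbations=100, eps=0.05, rng=None):
    """Robustness-style score: fraction of perturbed inputs whose predicted
    class matches the unperturbed prediction. (A stand-in, not the paper's RQ.)"""
    rng = np.random.default_rng(rng)
    base_pred = model(x[None, :]).argmax(axis=-1)[0]
    noise = rng.uniform(-eps, eps, size=(n_perturbations,) + x.shape)
    perturbed_preds = model(x[None, :] + noise).argmax(axis=-1)
    return (perturbed_preds == base_pred).mean()

# Toy linear "classifier" returning logits (2 features, 3 classes), for demo only.
W = np.array([[2.0, -1.0], [-1.5, 1.0], [0.3, 0.2]]).T
model = lambda X: X @ W

x = np.array([1.0, 0.5])
print("UQ (max softmax):     ", uq_score(model(x[None, :]))[0])
print("RQ-style (stability): ", rq_score(model, x, rng=0))
```

In the spirit of the combination the abstract mentions, one simple way to use both signals would be to flag a prediction as reliable only when both scores exceed chosen thresholds; the paper's actual combination strategy may differ.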