[2507.08150] CLEAR: Calibrated Learning for Epistemic and Aleatoric Risk
Statistics > Machine Learning
arXiv:2507.08150 (stat)
[Submitted on 10 Jul 2025 (v1), last revised 3 Mar 2026 (this version, v3)]

Title: CLEAR: Calibrated Learning for Epistemic and Aleatoric Risk
Authors: Ilia Azizi, Juraj Bodik, Jakob Heiss, Bin Yu

Abstract: Accurate uncertainty quantification is critical for reliable predictive modeling. Existing methods typically address either aleatoric uncertainty due to measurement noise or epistemic uncertainty resulting from limited data, but not both in a balanced manner. We propose CLEAR, a calibration method with two distinct parameters, $\gamma_1$ and $\gamma_2$, to combine the two uncertainty components and improve the conditional coverage of predictive intervals for regression tasks. CLEAR is compatible with any pair of aleatoric and epistemic estimators; we show how it can be used with (i) quantile regression for aleatoric uncertainty and (ii) ensembles drawn from the Predictability-Computability-Stability (PCS) framework for epistemic uncertainty. Across 17 diverse real-world datasets, CLEAR achieves an average improvement of 28.3% and 17.5% in interval width compared to the two individually calibrated baselines while maintaining nominal coverage. Similar improvements are observed when applying CLEAR to Deep Ensembles (epistemic) and Simultaneous Quantile Regression (...
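The abstract does not spell out how $\gamma_1$ and $\gamma_2$ enter the interval construction, but one natural reading is that the interval half-width is a weighted sum of the two uncertainty estimates, with the weights tuned on a calibration set. The sketch below illustrates that reading only; the function names, the additive combination rule, and the grid-search calibration are all assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np


def clear_interval(y_pred, u_alea, u_epi, gamma1, gamma2):
    """Hypothetical interval rule: half-width is a gamma-weighted sum of
    an aleatoric width estimate (u_alea) and an epistemic one (u_epi)."""
    half_width = gamma1 * u_alea + gamma2 * u_epi
    return y_pred - half_width, y_pred + half_width


def calibrate_gammas(y_cal, y_pred, u_alea, u_epi, alpha=0.1, grid=None):
    """Illustrative calibration: grid-search (gamma1, gamma2) on held-out
    data; among pairs reaching >= (1 - alpha) empirical coverage, return
    the pair giving the narrowest average interval."""
    if grid is None:
        grid = np.linspace(0.0, 3.0, 31)
    best, best_width = None, np.inf
    for g1 in grid:
        for g2 in grid:
            lo, hi = clear_interval(y_pred, u_alea, u_epi, g1, g2)
            coverage = np.mean((y_cal >= lo) & (y_cal <= hi))
            width = np.mean(hi - lo)
            if coverage >= 1 - alpha and width < best_width:
                best, best_width = (g1, g2), width
    return best
```

Using two separate weights (rather than a single scale factor) lets the calibration shrink whichever component is overestimated and inflate the other, which is the balance between aleatoric and epistemic risk the abstract emphasizes.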