[2602.20153] JUCAL: Jointly Calibrating Aleatoric and Epistemic Uncertainty in Classification Tasks
Summary
The paper presents JUCAL, a novel calibration method that jointly addresses aleatoric and epistemic uncertainty in classification tasks, outperforming existing techniques.
Why It Matters
Accurately calibrated uncertainty is crucial for reliable predictions: models that misrepresent it can be overconfident in some input regions and underconfident in others. By jointly calibrating both uncertainty types, JUCAL improves on traditional methods that address only one, making it a practical tool for a wide range of classification tasks.
Key Takeaways
- JUCAL effectively balances aleatoric and epistemic uncertainties in classification models.
- The method shows superior performance compared to state-of-the-art calibration techniques.
- JUCAL can be applied to various ensemble classifiers with minimal computational overhead.
- It reduces negative log-likelihood (NLL) and predictive set size significantly.
- Using JUCAL can lead to lower inference costs, enhancing efficiency.
Statistics > Machine Learning — arXiv:2602.20153 (stat)
[Submitted on 23 Feb 2026]
Title: JUCAL: Jointly Calibrating Aleatoric and Epistemic Uncertainty in Classification Tasks
Authors: Jakob Heiss, Sören Lambrecht, Jakob Weissteiner, Hanna Wutte, Žan Žurič, Josef Teichmann, Bin Yu
Abstract: We study post-calibration uncertainty for trained ensembles of classifiers. Specifically, we consider both aleatoric (label noise) and epistemic (model) uncertainty. Among the most popular and widely used calibration methods in classification are temperature scaling (i.e., pool-then-calibrate) and conformal methods. However, the main shortcoming of these calibration methods is that they do not balance the proportion of aleatoric and epistemic uncertainty. Not balancing these uncertainties can severely misrepresent predictive uncertainty, leading to overconfident predictions in some input regions while being underconfident in others. To address this shortcoming, we present a simple but powerful calibration algorithm, Joint Uncertainty Calibration (JUCAL), that jointly calibrates aleatoric and epistemic uncertainty. JUCAL jointly calibrates two constants to weight and scale epistemic and aleatoric uncertainties by optimizing the negative log-likelihood (NLL) on the validation/calibration dataset. JUCAL can be appli...
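The abstract's recipe — two scalars, one weighting the epistemic spread and one scaling the aleatoric sharpness, fit by minimizing NLL on a held-out calibration set — can be illustrated with a short sketch. This is not the paper's implementation: the exact parametrization is an assumption. Here `lam` scales each ensemble member's deviation from the mean logits (epistemic spread) and `temp` is a temperature on the result (aleatoric sharpness); both are fit by a simple grid search over the calibration NLL.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def jucal_probs(logits, lam, temp):
    """Hypothetical two-constant calibration of ensemble logits.

    logits: array of shape (M, N, C) -- M ensemble members,
            N examples, C classes.
    lam:    scales each member's deviation from the ensemble mean
            (epistemic spread); temp: temperature (aleatoric scale).
    """
    mu = logits.mean(axis=0, keepdims=True)
    scaled = (mu + lam * (logits - mu)) / temp
    return softmax(scaled).mean(axis=0)          # average member probs

def nll(probs, labels):
    """Mean negative log-likelihood of the true labels."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_jucal(logits, labels, grid=np.logspace(-1, 1, 21)):
    """Jointly pick (lam, temp) minimizing calibration-set NLL
    via a coarse log-spaced grid search."""
    best_nll, best_lam, best_temp = np.inf, 1.0, 1.0
    for lam in grid:
        for temp in grid:
            v = nll(jucal_probs(logits, lam, temp), labels)
            if v < best_nll:
                best_nll, best_lam, best_temp = v, lam, temp
    return best_lam, best_temp, best_nll

# Toy usage on synthetic ensemble logits.
rng = np.random.default_rng(0)
ens_logits = 3.0 * rng.normal(size=(5, 200, 3))   # 5 members, 200 examples
labels = rng.integers(0, 3, size=200)
lam, temp, fitted_nll = fit_jucal(ens_logits, labels)
baseline_nll = nll(jucal_probs(ens_logits, 1.0, 1.0), labels)
```

Because the grid contains (1, 1) — plain ensemble averaging — the fitted NLL can never exceed the uncalibrated baseline; in practice one would use a proper optimizer over the two constants rather than a grid.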