[2603.02719] An Empirical Analysis of Calibration and Selective Prediction in Multimodal Clinical Condition Classification
Computer Science > Machine Learning
arXiv:2603.02719 (cs)
[Submitted on 3 Mar 2026]

Title: An Empirical Analysis of Calibration and Selective Prediction in Multimodal Clinical Condition Classification
Authors: L. Julián Lechuga López, Farah E. Shamout, Tim G. J. Rudner

Abstract: As artificial intelligence systems move toward clinical deployment, ensuring reliable prediction behavior is fundamental for safety-critical decision-making tasks. One proposed safeguard is selective prediction, where models can defer uncertain predictions to human experts for review. In this work, we empirically evaluate the reliability of uncertainty-based selective prediction in multilabel clinical condition classification using multimodal ICU data. Across a range of state-of-the-art unimodal and multimodal models, we find that selective prediction can substantially degrade performance despite strong standard evaluation metrics. This failure is driven by severe class-dependent miscalibration, whereby models assign high uncertainty to correct predictions and low uncertainty to incorrect ones, particularly for underrepresented clinical conditions. Our results show that commonly used aggregate metrics can obscure these effects, limiting their ability to assess s...
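The mechanism the abstract evaluates, uncertainty-based selective prediction for multilabel classification, can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the confidence rule (probability of the predicted side of each label) and the threshold `tau` are assumptions chosen for clarity.

```python
import numpy as np

def selective_prediction(probs, labels, tau=0.8):
    """Retain only predictions whose confidence is at least tau;
    defer the rest (in a clinical setting, to a human expert).

    probs:  (n_samples, n_labels) predicted probabilities in [0, 1]
    labels: (n_samples, n_labels) binary ground-truth labels
    Returns (coverage, selective_accuracy) over retained predictions.
    """
    preds = (probs >= 0.5).astype(int)
    # Confidence of the chosen side: p for a positive call, 1 - p for a negative.
    confidence = np.where(preds == 1, probs, 1.0 - probs)
    keep = confidence >= tau
    coverage = keep.mean()  # fraction of label predictions not deferred
    if not keep.any():
        return 0.0, float("nan")
    selective_acc = (preds[keep] == labels[keep]).mean()
    return float(coverage), float(selective_acc)

# Toy example: two patients, two conditions.
probs = np.array([[0.90, 0.10],
                  [0.60, 0.95]])
labels = np.array([[1, 0],
                   [1, 1]])
cov, acc = selective_prediction(probs, labels, tau=0.8)
# The 0.60 prediction is deferred; the three confident ones are kept.
```

The failure mode the paper reports corresponds to `confidence` being miscalibrated per class: if a model is confidently wrong on an underrepresented condition, raising `tau` removes correct predictions while keeping incorrect ones, so selective accuracy can fall below the unselective baseline.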