[2603.02719] An Empirical Analysis of Calibration and Selective Prediction in Multimodal Clinical Condition Classification

About this article

Computer Science > Machine Learning
arXiv:2603.02719 (cs) · Submitted on 3 Mar 2026

Title: An Empirical Analysis of Calibration and Selective Prediction in Multimodal Clinical Condition Classification
Authors: L. Julián Lechuga López, Farah E. Shamout, Tim G. J. Rudner

Abstract: As artificial intelligence systems move toward clinical deployment, ensuring reliable prediction behavior is fundamental for safety-critical decision-making tasks. One proposed safeguard is selective prediction, where models can defer uncertain predictions to human experts for review. In this work, we empirically evaluate the reliability of uncertainty-based selective prediction in multilabel clinical condition classification using multimodal ICU data. Across a range of state-of-the-art unimodal and multimodal models, we find that selective prediction can substantially degrade performance despite strong standard evaluation metrics. This failure is driven by severe class-dependent miscalibration, whereby models assign high uncertainty to correct predictions and low uncertainty to incorrect ones, particularly for underrepresented clinical conditions. Our results show that commonly used aggregate metrics can obscure these effects, limiting their ability to assess s...
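The selective-prediction setup the abstract describes can be sketched in a few lines: a prediction is emitted only when the model's confidence clears a threshold, and everything else is deferred to a human reviewer. This is a minimal illustrative sketch, not code from the paper; the function name, threshold value, and toy data are all assumptions for illustration.

```python
# Minimal sketch of uncertainty-based selective prediction: emit a
# prediction only when confidence clears a threshold, defer otherwise.
# Names and toy data are illustrative, not taken from the paper.

def selective_predict(probs, labels, threshold=0.8):
    """Return (coverage, selective_accuracy) at a confidence threshold.

    probs:  list of per-example predicted class-probability lists
    labels: list of true class indices
    """
    accepted = correct = 0
    for p, y in zip(probs, labels):
        conf = max(p)
        if conf < threshold:
            continue  # defer this prediction to a human reviewer
        accepted += 1
        if p.index(conf) == y:
            correct += 1
    coverage = accepted / len(labels)
    accuracy = correct / accepted if accepted else float("nan")
    return coverage, accuracy

# Toy example of the failure mode the paper highlights: the model is
# confident on a wrong prediction and uncertain on a correct one, so
# abstention removes a correct answer while keeping a mistake.
probs = [
    [0.95, 0.05],  # confident but wrong (true label 1) -> kept
    [0.55, 0.45],  # uncertain but correct (true label 0) -> deferred
    [0.90, 0.10],  # confident and correct (true label 0) -> kept
]
labels = [1, 0, 0]
cov, acc = selective_predict(probs, labels, threshold=0.8)
print(cov, acc)  # coverage 2/3, selective accuracy 1/2
```

With well-calibrated confidences, raising the threshold should raise selective accuracy as coverage falls; the abstract reports that class-dependent miscalibration can invert this relationship for underrepresented conditions.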

Originally published on March 04, 2026. Curated by AI News.

