[2602.08470] Learning Credal Ensembles via Distributionally Robust Optimization

arXiv - Machine Learning

Summary

This paper presents CreDRO, a method for learning credal ensembles via distributionally robust optimization (DRO), improving model robustness to distribution shifts between training and test data.

Why It Matters

The research addresses a critical gap in understanding epistemic uncertainty in machine learning models, particularly in scenarios where distribution shifts occur. By improving the robustness of models, this work has significant implications for applications in fields like medical diagnostics and out-of-distribution detection.

Key Takeaways

  • CreDRO captures epistemic uncertainty from both training randomness and distribution shifts.
  • The method outperforms existing credal approaches in various benchmarks.
  • It offers a principled framework for quantifying predictive uncertainty.
  • The research highlights the importance of robust optimization in machine learning.
  • Applications include selective classification and out-of-distribution detection.

Full Abstract

Computer Science > Machine Learning · arXiv:2602.08470 (cs)
Submitted on 9 Feb 2026 (v1), last revised 26 Feb 2026 (this version, v2)

Title: Learning Credal Ensembles via Distributionally Robust Optimization
Authors: Kaizheng Wang, Ghifari Adam Faza, Fabio Cuzzolin, Siu Lun Chau, David Moens, Hans Hallez

Abstract: Credal predictors are models that are aware of epistemic uncertainty and produce a convex set of probabilistic predictions. They offer a principled way to quantify predictive epistemic uncertainty (EU) and have been shown to improve model robustness in various settings. However, most state-of-the-art methods mainly define EU as disagreement caused by random training initializations, which mostly reflects sensitivity to optimization randomness rather than uncertainty from deeper sources. To address this, we define EU as disagreement among models trained with varying relaxations of the i.i.d. assumption between training and test data. Based on this idea, we propose CreDRO, which learns an ensemble of plausible models through distributionally robust optimization. As a result, CreDRO captures EU not only from training randomness but also from meaningful disagreement due to potential distribution shifts between training and test data. Empirical results show that CreDRO consistently outperforms existing credal...
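The excerpt above describes the idea (an ensemble trained under varying relaxations of the i.i.d. assumption, whose disagreement forms a credal set) but not CreDRO's exact objective. As a rough, illustrative sketch only, the toy code below trains several logistic-regression models with a CVaR-style worst-case reweighting — a common DRO surrogate, not necessarily the one the authors use — at different robustness radii, and reports lower/upper class probabilities over the ensemble. All function names (`train_dro_logreg`, `credal_predict`) and the specific reweighting scheme are assumptions for illustration, not the paper's method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dro_logreg(X, y, radius, lr=0.1, epochs=200, seed=0):
    """Logistic regression with a simple CVaR-style DRO surrogate:
    each step computes gradients only on the worst `radius` fraction
    of per-sample losses (radius=1.0 recovers standard ERM)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    n = len(y)
    k = max(1, int(radius * n))  # size of the worst-case subset
    for _ in range(epochs):
        p = sigmoid(X @ w)
        losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        worst = np.argsort(losses)[-k:]  # adversarially reweighted samples
        grad = X[worst].T @ (p[worst] - y[worst]) / k
        w -= lr * grad
    return w

def credal_predict(models, x):
    """Lower/upper class-1 probability across the ensemble: the interval
    spanned by the members' predictions stands in for the credal set."""
    probs = [sigmoid(x @ w) for w in models]
    return min(probs), max(probs)
```

A wide `[lower, upper]` interval from `credal_predict` signals high epistemic uncertainty for that input, which is the quantity credal approaches use for selective classification and out-of-distribution detection.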
