[2509.12573] No Need for Learning to Defer? A Training Free Deferral Framework to Multiple Experts through Conformal Prediction

arXiv - Machine Learning 4 min read

Computer Science > Machine Learning
arXiv:2509.12573 (cs)
[Submitted on 16 Sep 2025 (v1), last revised 28 Mar 2026 (this version, v3)]

Title: No Need for Learning to Defer? A Training Free Deferral Framework to Multiple Experts through Conformal Prediction
Authors: Tim Bary, Benoît Macq, Louis Petit

Abstract: AI systems often struggle to provide reliable predictions across all inputs, motivating hybrid human-AI decision-making. Existing Learning to Defer (L2D) approaches address this by training models to selectively defer to human experts. However, these approaches require extensive training data annotated by all experts and are sensitive to changes in expert composition, necessitating costly retraining. We propose a training-free, model- and expert-agnostic framework for expert deferral based on conformal prediction. Our method leverages prediction sets from a conformal predictor to quantify label-specific uncertainty and selects the most suitable expert using a segregativity criterion, which measures how well an expert discriminates among plausible labels. Experiments across three models on CIFAR10-H and HAM10000 demonstrate that our method can reduce the number of training labels per expert by up to 91.3% while maintaining predictive accuracy in low-data regimes. Bein...
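The two ingredients the abstract describes, conformal prediction sets for label-specific uncertainty and a segregativity-based choice of expert, can be sketched in a few lines. The sketch below is illustrative only: it uses the standard split-conformal score (one minus the model's probability of the true label) and treats each expert's per-label accuracy, averaged over the plausible labels, as a crude stand-in for the paper's segregativity criterion; neither is taken from the paper's exact definitions.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with score s(x, y) = 1 - p_hat(y | x)."""
    n = len(cal_labels)
    # Nonconformity score of each calibration example's true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level, clipped to 1.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # Prediction set: every label whose score falls below the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

def select_expert(pred_set, expert_label_acc):
    """Pick the expert who best discriminates among the plausible labels.

    `expert_label_acc` is a hypothetical per-expert array of per-label
    accuracies; averaging it over the prediction set is an assumed
    simplification of the segregativity criterion, not the paper's formula.
    """
    segregativity = [acc[pred_set].mean() for acc in expert_label_acc]
    return int(np.argmax(segregativity))
```

For a two-label prediction set {1, 2}, an expert who is accurate on labels 1 and 2 is selected over one who is accurate only on label 0, which is the intuition behind deferring to the expert best able to tell the remaining plausible labels apart.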

Originally published on March 31, 2026. Curated by AI News.
