[2508.02566] Model-Agnostic Dynamic Feature Selection with Uncertainty Quantification

Summary

This paper presents a model-agnostic framework for dynamic feature selection (DFS) that incorporates uncertainty quantification, addressing limitations of existing methods in high-stakes decision-making scenarios.

Why It Matters

Dynamic feature selection is crucial in machine learning, especially under resource constraints. This research enhances the reliability of feature selection by integrating uncertainty quantification, which is vital for applications in critical fields such as healthcare and finance where decision-making accuracy is paramount.

Key Takeaways

  • Introduces a model-agnostic DFS framework compatible with existing classifiers.
  • Highlights the importance of uncertainty quantification in feature selection.
  • Demonstrates competitive accuracy against state-of-the-art DFS methods.
  • Identifies new sources of uncertainty unique to dynamic feature selection.
  • Emphasizes the need for uncertainty-aware approaches in high-stakes scenarios.

Computer Science > Machine Learning

arXiv:2508.02566 (cs) [Submitted on 4 Aug 2025 (v1), last revised 18 Feb 2026 (this version, v3)]

Title: Model-Agnostic Dynamic Feature Selection with Uncertainty Quantification
Authors: Javier Fumanal-Idocin, Raquel Fernandez-Peralta, Javier Andreu-Perez

Abstract: Dynamic feature selection (DFS) addresses budget constraints in decision-making by sequentially acquiring features for each instance, making it appealing for resource-limited scenarios. However, existing DFS methods require models specifically designed for the sequential acquisition setting, limiting compatibility with models already deployed in practice. Furthermore, they provide limited uncertainty quantification, undermining trust in high-stakes decisions. In this work, we show that DFS introduces new uncertainty sources compared to the static setting. We formalise how model adaptation to feature subsets induces epistemic uncertainty, how standard imputation strategies bias aleatoric uncertainty estimation, and why predictive confidence fails to discriminate between good and bad selection policies. We also propose a model-agnostic DFS framework compatible with pre-trained classifiers, including interpretable-by-design models, through efficient subset reparametrization strategies. Empirical evaluation ...
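The sequential acquisition setting the abstract describes can be sketched as a simple greedy loop around a fixed, pre-trained classifier: at each step, acquire the feature whose observed value most reduces predictive entropy, imputing the mean for features not yet acquired. This is a minimal illustration only, not the paper's subset-reparametrization method; the linear classifier weights, feature means, and all function names here are hypothetical, and a real policy would score the *expected* entropy reduction rather than peeking at the true value as this toy greedy does. It also uses exactly the mean-imputation strategy the paper argues biases aleatoric uncertainty estimates.

```python
import numpy as np

# Hypothetical pre-trained binary linear classifier (weights are illustrative).
W = np.array([1.5, -2.0, 0.5, 3.0])
b = 0.1
FEATURE_MEANS = np.zeros(4)  # imputation values for features not yet acquired

def predict_proba(x, acquired):
    """Predict class probabilities, mean-imputing unacquired features."""
    x_imputed = np.where(acquired, x, FEATURE_MEANS)
    p = 1.0 / (1.0 + np.exp(-(W @ x_imputed + b)))
    return np.array([1.0 - p, p])

def entropy(p):
    """Predictive entropy of a probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def greedy_dfs(x, budget):
    """Sequentially acquire the feature whose value most reduces
    predictive entropy, until the acquisition budget is spent."""
    acquired = np.zeros_like(x, dtype=bool)
    for _ in range(budget):
        best_j, best_h = None, np.inf
        for j in np.flatnonzero(~acquired):
            trial = acquired.copy()
            trial[j] = True
            h = entropy(predict_proba(x, trial))
            if h < best_h:
                best_j, best_h = j, h
        acquired[best_j] = True
    return acquired, predict_proba(x, acquired)

acquired, proba = greedy_dfs(np.array([0.8, -0.3, 1.2, 0.5]), budget=2)
```

Because the classifier is fixed and only `predict_proba` is queried, the loop is model-agnostic in the sense the paper targets: any deployed probabilistic classifier could be dropped in, which is also why the imputation and adaptation biases the authors formalise matter.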
