[2508.14734] AFABench: A Generic Framework for Benchmarking Active Feature Acquisition
Summary
AFABench introduces a benchmark framework for Active Feature Acquisition (AFA), addressing the need for standardized evaluation of AFA methods across diverse datasets and acquisition strategies.
Why It Matters
This framework matters because it gives researchers a systematic way to evaluate and compare AFA strategies under identical conditions. With its modular design and diverse datasets, AFABench clarifies the trade-offs between predictive performance and acquisition cost that arise in practical machine learning applications.
Key Takeaways
- AFABench is the first standardized benchmark for Active Feature Acquisition.
- It includes a variety of synthetic and real-world datasets for comprehensive evaluation.
- The framework supports multiple acquisition policies, facilitating easy integration of new methods.
- A novel dataset, CUBE-NM, is introduced to test the limitations of myopic selection strategies.
- Benchmark results reveal the trade-offs among different AFA strategies and provide a reference point for future research.
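To make the notion of a "myopic selection strategy" concrete, here is a minimal sketch of a greedy, one-step-lookahead acquisition policy of the kind such benchmarks compare against non-myopic alternatives. The function names and the toy per-feature utility are illustrative assumptions, not AFABench's actual API; in a real method, `score_fn` would estimate something like expected information gain given the features observed so far.

```python
import numpy as np

def greedy_myopic_policy(observed, score_fn, budget):
    """Acquire features one at a time, each step picking the unobserved
    feature with the highest *one-step* utility score (no lookahead).

    observed : boolean mask of already-acquired features
    score_fn : score_fn(j, mask) -> utility of acquiring feature j next
    budget   : maximum number of acquisitions
    Returns the acquisition order as a list of feature indices.
    """
    observed = observed.copy()
    order = []
    for _ in range(budget):
        candidates = np.flatnonzero(~observed)
        if candidates.size == 0:
            break
        # Myopic step: rank candidates by immediate utility only.
        j = max(candidates, key=lambda c: score_fn(c, observed))
        observed[j] = True
        order.append(int(j))
    return order

# Toy utility: a fixed per-feature "informativeness" score, standing in
# for a mask-conditional information-gain estimate.
scores = np.array([0.1, 0.9, 0.4, 0.7])
order = greedy_myopic_policy(
    observed=np.zeros(4, dtype=bool),
    score_fn=lambda j, mask: scores[j],
    budget=2,
)
print(order)  # acquires the two highest-scoring features: [1, 3]
```

Because each step ignores how the current choice changes the value of later acquisitions, a greedy policy can fail when features are only informative in combination, which is exactly the failure mode a dataset like CUBE-NM is designed to expose.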
Computer Science > Machine Learning — arXiv:2508.14734 (cs)
[Submitted on 20 Aug 2025 (v1), last revised 22 Feb 2026 (this version, v3)]
Title: AFABench: A Generic Framework for Benchmarking Active Feature Acquisition
Authors: Valter Schütz, Han Wu, Reza Rezvan, Linus Aronsson, Morteza Haghir Chehreghani
Abstract: In many real-world scenarios, acquiring all features of a data instance can be expensive or impractical due to monetary cost, latency, or privacy concerns. Active Feature Acquisition (AFA) addresses this challenge by dynamically selecting a subset of informative features for each data instance, trading predictive performance against acquisition cost. While numerous methods have been proposed for AFA, ranging from myopic information-theoretic strategies to non-myopic reinforcement learning approaches, fair and systematic evaluation of these methods has been hindered by a lack of standardized benchmarks. In this paper, we introduce AFABench, the first benchmark framework for AFA. Our benchmark includes a diverse set of synthetic and real-world datasets, supports a wide range of acquisition policies, and provides a modular design that enables easy integration of new methods and tasks. We implement and evaluate representative algorithms from all major categories, including static, myopic, and reinforcement learning approaches …
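The abstract frames AFA as a per-instance loop: a policy acquires features until a cost budget is spent, then a predictor classifies from the partial observation. The sketch below shows one plausible shape for such an evaluation harness; the interfaces (`policy`, `classifier`) and the zero-imputation of unobserved features are simplifying assumptions for illustration, not the framework's actual design.

```python
import numpy as np

def evaluate_afa_policy(X, y, policy, classifier, costs, budget):
    """Evaluate an acquisition policy on the performance/cost trade-off.

    For each instance, the policy picks features (or None to stop) until
    the per-instance budget would be exceeded; the classifier then
    predicts from the partially observed vector.
    Returns (accuracy, mean acquisition cost).
    """
    n, d = X.shape
    correct, total_cost = 0, 0.0
    for i in range(n):
        mask = np.zeros(d, dtype=bool)
        spent = 0.0
        while True:
            j = policy(X[i], mask)          # next feature index, or None
            if j is None or spent + costs[j] > budget:
                break
            mask[j] = True
            spent += costs[j]
        # Simplifying assumption: unobserved features are zero-imputed.
        x_partial = np.where(mask, X[i], 0.0)
        correct += int(classifier(x_partial, mask) == y[i])
        total_cost += spent
    return correct / n, total_cost / n

# Toy run: the label is the sign of feature 0, which is also the cheapest.
X = np.array([[1.0, 5.0], [-1.0, 5.0], [2.0, 5.0]])
y = np.array([1, 0, 1])
costs = np.array([1.0, 3.0])
acc, mean_cost = evaluate_afa_policy(
    X, y,
    policy=lambda x, mask: 0 if not mask[0] else None,
    classifier=lambda x, mask: int(x[0] > 0),
    costs=costs, budget=2.0,
)
print(acc, mean_cost)  # 1.0 1.0
```

Sweeping `budget` over a grid with a harness like this yields the accuracy-versus-cost curves on which different static, myopic, and non-myopic policies can be compared.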