[2502.13062] AI-Assisted Decision Making with Human Learning
Summary
This paper studies AI-assisted decision-making settings in which an algorithm chooses which features a human decision-maker sees, and the human learns from repeated interactions. This setup creates a tradeoff between immediate prediction accuracy and the human's long-term understanding.
Why It Matters
As AI systems increasingly support human decision-making, understanding the dynamics between algorithm recommendations and human learning is crucial. This research provides insights into optimizing feature selection, which can enhance decision accuracy and foster better human understanding in various fields, including healthcare and finance.
Key Takeaways
- AI algorithms can support human decision-making by selecting features for consideration.
- There is a trade-off between immediate accuracy and long-term learning benefits.
- The algorithm's patience and the human's learning ability significantly influence feature selection strategies.
- The optimal feature-selection strategy can be computed and takes the form of a stationary sequence of feature subsets.
- Improved understanding and prediction accuracy can be achieved as the algorithm becomes more patient.
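The accuracy-versus-learning tradeoff in the takeaways above can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's actual model: the linear weights, the exponential learning rule, and the two strategies ("patient" and "myopic") are illustrative assumptions chosen to make the tradeoff visible.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 6, 2                                    # total features; features shown per round
w = np.array([3.0, 2.5, 0.5, 0.4, 0.3, 0.2])  # "true" weights in the algorithm's model
h = np.concatenate([np.zeros(2), w[2:]])      # human's weights: ignorant of the two strong features

def round_error(subset, h_t, n=200):
    """Human's mean squared prediction error when only `subset` is visible."""
    X = rng.normal(size=(n, d))
    mask = np.zeros(d)
    mask[subset] = 1.0
    return float(np.mean((X @ w - (X * mask) @ h_t) ** 2))

def simulate(strategy, T=30, lr=0.3):
    h_t, errs = h.copy(), []
    for _ in range(T):
        subset = strategy(h_t)
        errs.append(round_error(subset, h_t))
        h_t[subset] += lr * (w[subset] - h_t[subset])  # human learns the shown features
    return errs, h_t

# Patient: always show the truly informative features, accepting short-term error.
patient = lambda h_t: np.argsort(-np.abs(w))[:k]
# Myopic: show the features the human already weighs most heavily.
myopic = lambda h_t: np.argsort(-np.abs(h_t))[:k]

err_p, h_p = simulate(patient)
err_m, h_m = simulate(myopic)
print(f"patient: round-1 MSE {err_p[0]:.2f} -> round-30 MSE {err_p[-1]:.2f}")
print(f"myopic:  round-1 MSE {err_m[0]:.2f} -> round-30 MSE {err_m[-1]:.2f}")
```

In this sketch the myopic strategy starts slightly more accurate (the human already uses those features well) but never improves, while the patient strategy pays an early accuracy cost and ends with far lower error once the human has learned the strong features' importance.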
Paper Details
Computer Science > Artificial Intelligence, arXiv:2502.13062 (cs)
Submitted on 18 Feb 2025 (v1), last revised 19 Feb 2026 (this version, v2)
Title: AI-Assisted Decision Making with Human Learning
Authors: Gali Noti, Kate Donahue, Jon Kleinberg, Sigal Oren
Abstract: AI systems increasingly support human decision-making. In many cases, despite the algorithm's superior performance, the final decision remains in human hands. For example, an AI may assist doctors in determining which diagnostic tests to run, but the doctor ultimately makes the diagnosis. This paper studies such AI-assisted decision-making settings, where the human learns through repeated interactions with the algorithm. In our framework, the algorithm -- designed to maximize decision accuracy according to its own model -- determines which features the human can consider. The human then makes a prediction based on their own less accurate model. We observe that the discrepancy between the algorithm's model and the human's model creates a fundamental tradeoff: Should the algorithm prioritize recommending more informative features, encouraging the human to learn their importance, even if it results in less accurate predictions in the short term until learning occurs? Or is it preferable to forgo educating the human and instead select features that align more closely with their ...