[2602.21889] 2-Step Agent: A Framework for the Interaction of a Decision Maker with AI Decision Support
Summary
The paper presents the 2-Step Agent framework, which models the interaction between decision makers and AI decision support systems, highlighting potential pitfalls in AI-assisted decision making.
Why It Matters
As AI technologies increasingly support human decision making, understanding their effects is crucial. This framework addresses the risks of misaligned beliefs in AI predictions, emphasizing the need for proper training and documentation to enhance decision outcomes.
Key Takeaways
- The 2-Step Agent framework uses Bayesian causal inference to model how an AI prediction shifts a decision maker's beliefs, and how that shift affects the downstream decision and outcome.
- Misaligned prior beliefs can lead to worse outcomes than no decision support.
- Proper training and documentation are essential for effective AI integration.
Computer Science > Artificial Intelligence
arXiv:2602.21889 (cs) [Submitted on 25 Feb 2026]
Title: 2-Step Agent: A Framework for the Interaction of a Decision Maker with AI Decision Support
Authors: Otto Nyberg, Fausto Carcassi, Giovanni Cinà
Abstract: Across a growing number of fields, human decision making is supported by predictions from AI models. However, we still lack a deep understanding of the effects of adoption of these technologies. In this paper, we introduce a general computational framework, the 2-Step Agent, which models the effects of AI-assisted decision making. Our framework uses Bayesian methods for causal inference to model 1) how a prediction on a new observation affects the beliefs of a rational Bayesian agent, and 2) how this change in beliefs affects the downstream decision and subsequent outcome. Using this framework, we show by simulations how a single misaligned prior belief can be sufficient for decision support to result in worse downstream outcomes compared to no decision support. Our results reveal several potential pitfalls of AI-driven decision support and highlight the need for thorough model documentation and proper user training.
Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
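The two-step mechanism described in the abstract (AI prediction → Bayesian belief update → decision → outcome) can be sketched as a small simulation. Everything below is a hypothetical toy setup, not the paper's actual model or numbers: a binary state `Y`, an AI classifier whose true accuracy is 0.55, and a decision maker whose single misaligned prior belief is that the accuracy is 0.95.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions, not the paper's experiments):
p_y1 = 0.8           # P(Y = 1): acting "1" on the prior alone is right 80% of the time
true_acc = 0.55      # the AI's actual accuracy, barely better than chance
believed_acc = 0.95  # the agent's misaligned prior belief about that accuracy

def posterior_y1(pred, acc_belief):
    """Step 1: Bayesian update of P(Y=1 | prediction) under the believed accuracy."""
    like1 = acc_belief if pred == 1 else 1 - acc_belief  # P(pred | Y=1)
    like0 = 1 - acc_belief if pred == 1 else acc_belief  # P(pred | Y=0)
    return like1 * p_y1 / (like1 * p_y1 + like0 * (1 - p_y1))

def decide(p1):
    """Step 2: act to match the state judged more probable (0/1 utility)."""
    return 1 if p1 >= 0.5 else 0

n = 100_000
y = (rng.random(n) < p_y1).astype(int)   # true states
ai_right = rng.random(n) < true_acc      # whether the AI is correct on each case
pred = np.where(ai_right, y, 1 - y)      # AI predictions

acts_mis = np.array([decide(posterior_y1(p, believed_acc)) for p in pred])
acts_ok = np.array([decide(posterior_y1(p, true_acc)) for p in pred])
act_none = decide(p_y1)                  # no decision support: act on the prior

util_mis = (acts_mis == y).mean()   # over-trusts the weak model, so follows it
util_ok = (acts_ok == y).mean()     # aligned belief correctly ignores the weak signal
util_none = (np.full(n, act_none) == y).mean()

print(f"misaligned belief: {util_mis:.3f}")   # ~0.55
print(f"aligned belief:    {util_ok:.3f}")    # ~0.80
print(f"no support:        {util_none:.3f}")  # ~0.80
```

In this sketch the single misaligned belief about model accuracy is enough to make the supported agent strictly worse than acting on the prior alone, which mirrors in spirit the paper's headline simulation result.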