[2602.15865] AI as Teammate or Tool? A Review of Human-AI Interaction in Decision Support
Summary
This article reviews the role of AI in decision support, analyzing whether AI systems act as passive tools or collaborative teammates. It highlights the importance of interaction design and trust calibration, and argues for adaptive, context-aware AI systems.
Why It Matters
Understanding the dynamics of Human-AI interaction is crucial as AI systems become more integrated into decision-making processes. This review provides insights into how to enhance AI effectiveness by addressing design and trust issues, which is vital for industries relying on AI for critical decisions.
Key Takeaways
- AI systems often function passively due to an overemphasis on explainability-centric design.
- Effective Human-AI interaction requires adaptive, context-aware designs.
- Trust calibration is essential for improving decision-making with AI.
- Collaborative frameworks can enhance the role of AI as a teammate.
- Current AI designs must evolve to support shared mental models.
Computer Science > Human-Computer Interaction
arXiv:2602.15865 (cs) — Submitted on 26 Jan 2026
Title: AI as Teammate or Tool? A Review of Human-AI Interaction in Decision Support
Authors: Most. Sharmin Sultana Samu, Nafisa Khan, Kazi Toufique Elahi, Tasnuva Binte Rahman, Md. Rakibul Islam, Farig Sadeque
Abstract: The integration of Artificial Intelligence (AI) necessitates determining whether systems function as tools or collaborative teammates. In this study, by synthesizing the Human-AI Interaction (HAI) literature, we analyze this distinction across four dimensions: interaction design, trust calibration, collaborative frameworks, and healthcare applications. Our analysis reveals that static interfaces and miscalibrated trust limit AI efficacy. Performance hinges on aligning transparency with cognitive workflows, yet a "fluency trap" often inflates trust without improving decision-making. Consequently, an overemphasis on explainability leaves systems largely passive. Our findings show that current AI systems remain largely passive due to an overreliance on explainability-centric designs, and that transitioning AI to an active teammate requires adaptive, context-aware interactions that support shared mental models and the dynamic negotiation of authority between humans and AI.