[2602.20104] Align When They Want, Complement When They Need! Human-Centered Ensembles for Adaptive Human-AI Collaboration
Summary
This paper presents a novel human-centered adaptive AI ensemble that balances trust and performance in human-AI collaboration by toggling between aligned and complementary AI models.
Why It Matters
As AI systems increasingly assist in decision-making, understanding how to maintain human trust while enhancing performance is crucial. This research addresses the inherent tension between alignment and complementarity in AI, proposing a solution that could improve human-AI interactions across various applications.
Key Takeaways
- The paper identifies a fundamental tension between trust-building and performance-boosting in AI systems.
- A novel adaptive AI ensemble is proposed, which switches between aligned and complementary models based on context.
- Experiments show that this approach significantly enhances decision-making performance compared to traditional single AI models.
Computer Science > Artificial Intelligence · arXiv:2602.20104 (cs) · [Submitted on 23 Feb 2026]
Authors: Hasan Amin, Ming Yin, Rajiv Khanna
Abstract: In human-AI decision making, designing AI that complements human expertise has been a natural strategy to enhance human-AI collaboration, yet it often comes at the cost of decreased AI performance in areas of human strength. This can inadvertently erode human trust and lead people to ignore AI advice precisely when it is most needed. Conversely, an aligned AI fosters trust yet risks reinforcing suboptimal human behavior and lowering human-AI team performance. In this paper, we start by identifying this fundamental tension between performance-boosting (i.e., complementarity) and trust-building (i.e., alignment) as an inherent limitation of the traditional approach of training a single AI model to assist human decision making. To overcome this, we introduce a novel human-centered adaptive AI ensemble that strategically toggles between two specialist AI models - the aligned model and the complementary model - based on contextual cues, using an elegantly simple yet provably near-optimal Rational Routing Shortcut mechanism...
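The core idea of the ensemble can be illustrated with a minimal sketch: a router holds two specialist models and selects one per decision based on contextual cues. The paper's actual Rational Routing Shortcut is not specified in the summary above, so the threshold rule, the `Context` fields, and all names below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# Hypothetical contextual cues; the paper's real routing signal may differ.
@dataclass
class Context:
    human_confidence: float  # estimated human self-confidence, in [0, 1]
    human_error_risk: float  # estimated probability the human errs, in [0, 1]

def make_router(aligned: Callable, complementary: Callable,
                risk_threshold: float = 0.5) -> Callable:
    """Return a policy that toggles between the two specialist models.

    When the estimated risk of human error is high, complementary advice
    is most valuable; otherwise the aligned model is used to preserve
    trust. This simple threshold stands in for the routing mechanism.
    """
    def route(x, ctx: Context) -> Tuple[str, float]:
        if ctx.human_error_risk > risk_threshold:
            return complementary(x)
        return aligned(x)
    return route

# Toy specialist models on a scalar input, tagged by name.
aligned = lambda x: ("aligned", x)
complementary = lambda x: ("complementary", -x)

router = make_router(aligned, complementary)
print(router(3, Context(human_confidence=0.9, human_error_risk=0.2)))  # aligned branch
print(router(3, Context(human_confidence=0.4, human_error_risk=0.8)))  # complementary branch
```

The design point this sketch captures is that neither specialist model changes: only the per-instance routing decision adapts, which is what lets the ensemble build trust in areas of human strength while still boosting performance where the human is likely to err.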