[2602.17646] Multi-Round Human-AI Collaboration with User-Specified Requirements
Summary
The paper introduces a framework for multi-round human-AI collaboration that enforces user-specified requirements, aiming to improve decision quality in high-stakes scenarios.
Why It Matters
As reliance on AI for critical decisions grows, establishing frameworks that ensure AI complements human strengths while minimizing harm is essential. This research provides a structured approach to enhance collaboration dynamics, which is vital for fields such as healthcare and other decision-intensive environments.
Key Takeaways
- Introduces a human-centric framework for AI collaboration.
- Defines counterfactual harm and complementarity to guide AI interactions.
- Presents an online algorithm with guarantees for user-defined constraints.
- Demonstrates effectiveness through medical diagnostics and pictorial reasoning tasks.
- Shows that adjusting constraints can predictably influence human decision accuracy.
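To make the third takeaway concrete, here is a minimal illustrative sketch of what enforcing a user-specified violation-rate budget in an online loop might look like. The function name, the boolean event encoding, and the simple running-rate thresholding rule are assumptions for illustration only, not the paper's actual algorithm or its finite-sample guarantee.

```python
def online_constraint_monitor(events, alpha=0.1):
    """Track the running rate of user-defined constraint violations
    (e.g., counterfactual-harm events) over interaction rounds.

    This is a hypothetical sketch, not the paper's method. `events` is a
    sequence of booleans, True meaning the constraint was violated that
    round. Returns per-round decisions: True = AI intervention allowed,
    i.e., the empirical violation rate is still within the budget alpha.
    """
    violations = 0
    decisions = []
    for t, violated in enumerate(events, start=1):
        violations += int(violated)
        # Permit intervention only while the empirical violation rate
        # stays within the user-specified budget alpha.
        decisions.append(violations / t <= alpha)
    return decisions
```

Under this toy rule, tightening `alpha` makes the monitor withhold AI intervention sooner after violations, mirroring the summary's claim that adjusting constraints predictably shifts the collaboration dynamics.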
Computer Science > Machine Learning
arXiv:2602.17646 (cs) [Submitted on 19 Feb 2026]
Title: Multi-Round Human-AI Collaboration with User-Specified Requirements
Authors: Sima Noorani, Shayan Kiyani, Hamed Hassani, George Pappas
Abstract: As humans increasingly rely on multi-round conversational AI for high-stakes decisions, principled frameworks are needed to ensure such interactions reliably improve decision quality. We adopt a human-centric view governed by two principles: counterfactual harm, ensuring the AI does not undermine human strengths, and complementarity, ensuring it adds value where the human is prone to err. We formalize these concepts via user-defined rules, allowing users to specify exactly what harm and complementarity mean for their specific task. We then introduce an online, distribution-free algorithm with finite-sample guarantees that enforces the user-specified constraints over the collaboration dynamics. We evaluate our framework across two interactive settings: LLM-simulated collaboration on a medical diagnostic task and a human crowdsourcing study on a pictorial reasoning task. We show that our online procedure maintains prescribed counterfactual harm and complementarity violation rates even under nonstationary interaction dynamics. Moreover, tightening or loosening these constraints produce...