[2603.04746] Visioning Human-Agentic AI Teaming: Continuity, Tension, and Future Research
Computer Science > Artificial Intelligence
arXiv:2603.04746 (cs)
[Submitted on 5 Mar 2026]

Title: Visioning Human-Agentic AI Teaming: Continuity, Tension, and Future Research
Authors: Bowen Lou, Tian Lu, T. S. Raghu, Yingjie Zhang

Abstract: Artificial intelligence is undergoing a structural transformation marked by the rise of agentic systems capable of open-ended action trajectories, generative representations and outputs, and evolving objectives. These properties introduce structural uncertainty into human-AI teaming (HAT), including uncertainty about behavior trajectories, epistemic grounding, and the stability of governing logics over time. Under such conditions, alignment cannot be secured through agreement on bounded outputs; it must be continuously sustained as plans unfold and priorities shift. We advance Team Situation Awareness (Team SA) theory, grounded in shared perception, comprehension, and projection, as an integrative anchor for this transition. While Team SA remains analytically foundational, its stabilizing logic presumes that shared awareness, once achieved, will support coordinated action through iterative updating. Agentic AI challenges this presumption. Our argument unfolds in two stages: first, we extend Team SA to reconceptualize both human and AI awareness under open-ended agency, in...