[2602.18460] The Doctor Will (Still) See You Now: On the Structural Limits of Agentic AI in Healthcare
Summary
This article examines the limitations of agentic AI in healthcare, highlighting the gap between commercial promises and operational realities, and the implications for patient safety.
Why It Matters
Understanding the constraints of agentic AI in healthcare is crucial as these technologies are increasingly integrated into clinical settings. This research sheds light on the challenges of accountability, safety, and the need for a balanced approach to AI implementation in high-stakes environments.
Key Takeaways
- Agentic AI systems currently operate under near-total human oversight, driven by safety, regulatory, and liability constraints.
- Commercial promises of autonomy exceed the operational reality of systems deployed in healthcare.
- Stakeholder interviews reveal tensions in defining 'agentic' AI and its impact on patient safety.
Computer Science > Computers and Society
arXiv:2602.18460 (cs)
[Submitted on 6 Feb 2026]
Title: The Doctor Will (Still) See You Now: On the Structural Limits of Agentic AI in Healthcare
Authors: Gabriela Aránguiz Dias, Kiana Jafari, Allie Griffith, Carolina Aránguiz Dias, Grace Ra Kim, Lana Saadeddin, Mykel J. Kochenderfer
Abstract: Across healthcare, agentic artificial intelligence (AI) systems are increasingly promoted as capable of autonomous action, yet in practice they currently operate under near-total human oversight due to safety, regulatory, and liability constraints that make autonomous clinical reasoning infeasible in high-stakes environments. While market enthusiasm suggests a revolution in healthcare agents, the conceptual assumptions and accountability structures shaping these systems remain underexamined. We present a qualitative study based on interviews with 20 stakeholders, including developers, implementers, and end users. Our analysis identifies three mutually reinforcing tensions: conceptual fragmentation regarding the definition of 'agentic'; an autonomy contradiction where commercial promises exceed operational reality; and an evaluation blind spot that prioritizes technical benchmarks over sociotechnical safety. We argue that agentic AI functions as a sit...