[2602.18460] The Doctor Will (Still) See You Now: On the Structural Limits of Agentic AI in Healthcare

arXiv - AI · 4 min read

Summary

This article examines the limitations of agentic AI in healthcare, highlighting the gap between commercial promises and operational realities, and the implications for patient safety.

Why It Matters

Understanding the constraints of agentic AI in healthcare is crucial as these technologies are increasingly integrated into clinical settings. This research sheds light on the challenges of accountability, safety, and the need for a balanced approach to AI implementation in high-stakes environments.

Key Takeaways

  • Agentic AI systems currently require significant human oversight due to safety and regulatory concerns.
  • There is a disconnect between the commercial potential of AI and its practical application in healthcare.
  • Stakeholder interviews reveal tensions in defining 'agentic' AI and its impact on patient safety.

Computer Science > Computers and Society

arXiv:2602.18460 (cs) [Submitted on 6 Feb 2026]

Title: The Doctor Will (Still) See You Now: On the Structural Limits of Agentic AI in Healthcare

Authors: Gabriela Aránguiz Dias, Kiana Jafari, Allie Griffith, Carolina Aránguiz Dias, Grace Ra Kim, Lana Saadeddin, Mykel J. Kochenderfer

Abstract: Across healthcare, agentic artificial intelligence (AI) systems are increasingly promoted as capable of autonomous action, yet in practice they currently operate under near-total human oversight due to safety, regulatory, and liability constraints that make autonomous clinical reasoning infeasible in high-stakes environments. While market enthusiasm suggests a revolution in healthcare agents, the conceptual assumptions and accountability structures shaping these systems remain underexamined. We present a qualitative study based on interviews with 20 stakeholders, including developers, implementers, and end users. Our analysis identifies three mutually reinforcing tensions: conceptual fragmentation regarding the definition of 'agentic'; an autonomy contradiction where commercial promises exceed operational reality; and an evaluation blind spot that prioritizes technical benchmarks over sociotechnical safety. We argue that agentic AI functions as a sit...

Related Articles

Robotics

[D] Awesome AI Agent Incidents - A curated list of incidents, attack vectors, failure modes, and defensive tools for autonomous AI agents.

https://github.com/h5i-dev/awesome-ai-agent-incidents

Reddit - Machine Learning · 1 min ·
Llms

An attack class that passes every current LLM filter - no payload, no injection signature, no log trace

https://shapingrooms.com/research I published a paper today on something I've been calling postural manipulation. The short version: ordi...

Reddit - Artificial Intelligence · 1 min ·
Machine Learning

[2601.07855] RoAD Benchmark: How LiDAR Models Fail under Coupled Domain Shifts and Label Evolution

Abstract page for arXiv paper 2601.07855: RoAD Benchmark: How LiDAR Models Fail under Coupled Domain Shifts and Label Evolution

arXiv - AI · 3 min ·