[2603.03515] The Controllability Trap: A Governance Framework for Military AI Agents
Computer Science > Computers and Society
arXiv:2603.03515 (cs)
[Submitted on 3 Mar 2026]

Title: The Controllability Trap: A Governance Framework for Military AI Agents
Authors: Subramanyam Sahoo

Abstract: Agentic AI systems - capable of goal interpretation, world modeling, planning, tool use, long-horizon operation, and autonomous coordination - introduce distinct control failures not addressed by existing safety frameworks. We identify six agentic governance failures tied to these capabilities and show how they erode meaningful human control in military settings. We propose the Agentic Military AI Governance Framework (AMAGF), a measurable architecture structured around three pillars: Preventive Governance (reducing failure likelihood), Detective Governance (real-time detection of control degradation), and Corrective Governance (restoring or safely degrading operations). Its core mechanism, the Control Quality Score (CQS), is a composite real-time metric quantifying human control and enabling graduated responses as control weakens. For each failure type, we define concrete mechanisms, assign responsibilities across five institutional actors, and formalize evaluation metrics. A worked operational scenario illustrates implementation, and we situate the framework within established agent safety literature. We argue that govern...
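The abstract describes the CQS as a composite real-time metric that triggers graduated responses as human control weakens. The paper's actual formula is not given here, so the following Python sketch is purely illustrative: the indicator names, weights, and thresholds are assumptions, not the authors' definitions.

```python
from dataclasses import dataclass


@dataclass
class ControlIndicators:
    """Hypothetical control indicators, each normalized to [0, 1]."""
    goal_alignment: float        # operator intent vs. agent plan
    intervention_latency: float  # 1.0 = a human can intervene instantly
    observability: float         # fraction of agent state visible to operators


def control_quality_score(ind: ControlIndicators,
                          weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Weighted composite in [0, 1]; higher means stronger human control.

    The weights are illustrative assumptions, not values from the paper.
    """
    values = (ind.goal_alignment, ind.intervention_latency, ind.observability)
    return sum(w * v for w, v in zip(weights, values))


def graduated_response(cqs: float) -> str:
    """Map a CQS value to a response tier (thresholds are illustrative)."""
    if cqs >= 0.8:
        return "normal operation"
    if cqs >= 0.5:
        return "heightened monitoring"
    if cqs >= 0.3:
        return "require human confirmation"
    return "safe degradation / halt"
```

For example, indicators of (0.9, 0.8, 0.7) yield a CQS of 0.81 and "normal operation", while a score below 0.3 would trigger safe degradation, matching the abstract's notion of graduated responses as control weakens.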