[2603.12230] Security Considerations for Artificial Intelligence Agents
Computer Science > Machine Learning
arXiv:2603.12230 (cs)
[Submitted on 12 Mar 2026 (v1), last revised 5 Apr 2026 (this version, v2)]

Title: Security Considerations for Artificial Intelligence Agents
Authors: Ninghui Li, Kaiyuan Zhang, Kyle Polley, Jerry Ma

Abstract: This article, a lightly adapted version of Perplexity's response to NIST/CAISI Request for Information 2025-0035, details our observations and recommendations concerning the security of frontier AI agents. These insights are informed by Perplexity's experience operating general-purpose agentic systems used by millions of users and thousands of enterprises in both controlled and open-world environments. Agent architectures change core assumptions around code-data separation, authority boundaries, and execution predictability, creating new confidentiality, integrity, and availability failure modes. We map principal attack surfaces across tools, connectors, hosting boundaries, and multi-agent coordination, with particular emphasis on indirect prompt injection, confused-deputy behavior, and cascading failures in long-running workflows. We then assess current defenses as a layered stack: input-level and model-level mitigations, sandboxed execution, and deterministic policy enforcement for high-consequence actions. Finally, we identify standards and research gaps, inc...