[2603.19469] A Framework for Formalizing LLM Agent Security
Computer Science > Cryptography and Security
arXiv:2603.19469 (cs)
[Submitted on 19 Mar 2026]
Title: A Framework for Formalizing LLM Agent Security
Authors: Vincent Siu, Jingxuan He, Kyle Montgomery, Zhun Wang, Neil Gong, Chenguang Wang, Dawn Song
Abstract: Security in LLM agents is inherently contextual. For example, the same action taken by an agent may represent legitimate behavior or a security violation depending on whose instruction led to the action, what objective is being pursued, and whether the action serves that objective. However, existing definitions of security attacks against LLM agents often fail to capture this contextual nature. As a result, defenses face a fundamental utility-security tradeoff: applying defenses uniformly across all contexts can lead to significant utility loss, while applying defenses in insufficient or inappropriate contexts can result in security vulnerabilities. In this work, we present a framework that systematizes existing attacks and defenses from the perspective of contextual security. To this end, we propose four security properties that capture contextual security for LLM agents: task alignment (pursuing authorized objectives), action alignment (individual actions serving those objectives), source authorization (executing commands from authenticated sources), and data isolation (ensuring i...
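The four security properties in the abstract can be read as predicates over a proposed agent action and its surrounding context. The sketch below is an illustrative assumption, not the paper's formalism: the `Action` and `AgentContext` fields and the helper names are invented for exposition, and the fourth check only covers the part of data isolation visible before the abstract is cut off.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    source: str                 # who issued the instruction behind this action
    objective: str              # the objective this action claims to serve
    reads: set = field(default_factory=set)  # data items the action accesses

@dataclass
class AgentContext:
    authorized_objectives: set  # objectives the user actually authorized
    authenticated_sources: set  # principals allowed to issue commands
    permitted_data: dict        # objective -> data items that objective may touch

def task_alignment(ctx: AgentContext, action: Action) -> bool:
    # Task alignment: the agent pursues an authorized objective.
    return action.objective in ctx.authorized_objectives

def action_alignment(action: Action, serves_objective) -> bool:
    # Action alignment: this individual action serves its stated objective.
    # `serves_objective` is a placeholder oracle; deciding this is the hard part.
    return serves_objective(action)

def source_authorization(ctx: AgentContext, action: Action) -> bool:
    # Source authorization: the command came from an authenticated source.
    return action.source in ctx.authenticated_sources

def data_isolation(ctx: AgentContext, action: Action) -> bool:
    # Data isolation (partial sketch): the action touches only data
    # permitted for its objective.
    return action.reads <= ctx.permitted_data.get(action.objective, set())

def secure(ctx: AgentContext, action: Action, serves_objective) -> bool:
    # An action is contextually secure only if all four properties hold.
    return (task_alignment(ctx, action)
            and action_alignment(action, serves_objective)
            and source_authorization(ctx, action)
            and data_isolation(ctx, action))
```

For example, a calendar read issued by the user while booking a flight passes all four checks, while an email sent on the instruction of an untrusted webpage fails source authorization regardless of which objective it claims to serve; this is the contextual judgment a uniform allow/deny defense cannot make.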