[2603.22868] Agent-Sentry: Bounding LLM Agents via Execution Provenance
Computer Science > Cryptography and Security
arXiv:2603.22868 (cs)
[Submitted on 24 Mar 2026]

Title: Agent-Sentry: Bounding LLM Agents via Execution Provenance
Authors: Rohan Sequeira, Stavros Damianakis, Umar Iqbal, Konstantinos Psounis

Abstract: Agentic computing systems, which autonomously spawn new functionalities based on natural language instructions, are becoming increasingly prevalent. While immensely capable, these systems raise serious security, privacy, and safety concerns. Fundamentally, the full set of functionalities offered by these systems, combined with their probabilistic execution flows, is not known beforehand. Given this lack of characterization, it is non-trivial to validate whether a system has successfully carried out the user's intended task or instead executed irrelevant actions, potentially as a consequence of compromise. In this paper, we propose Agent-Sentry, a framework that attempts to bound agentic systems to address this problem. Our key insight is that agentic systems are designed for specific use cases and therefore need not expose unbounded or unspecified functionalities. Once bounded, these systems become easier to scrutinize. Agent-Sentry operationalizes this insight by uncovering frequent functionalities offered by an agentic system, along with their execution traces, to construct b...
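The paper's full text is not included here, but the abstract's core idea, profiling an agent's frequent functionalities via their execution traces and then scrutinizing runtime behavior against those bounds, can be sketched as follows. This is a minimal illustration, not the paper's implementation; all function and tool names are hypothetical.

```python
# Illustrative sketch of trace-based bounding (all names hypothetical).
# A "bound" is modeled as the set of tool-call sequences observed while
# profiling the agent's frequent functionalities offline.
from typing import List, Set, Tuple

AllowedTraces = Set[Tuple[str, ...]]

def build_bounds(profiling_runs: List[List[str]]) -> AllowedTraces:
    """Collect the distinct tool-call sequences seen during profiling."""
    return {tuple(run) for run in profiling_runs}

def is_within_bounds(trace: List[str], bounds: AllowedTraces) -> bool:
    """A runtime trace passes only if it matches a profiled sequence."""
    return tuple(trace) in bounds

# Example: two profiled executions of a flight-booking agent.
profiling = [
    ["search_flights", "book_flight", "send_confirmation"],
    ["search_flights", "send_confirmation"],
]
bounds = build_bounds(profiling)

# An in-bounds trace is accepted; an irrelevant action is flagged.
print(is_within_bounds(["search_flights", "book_flight", "send_confirmation"], bounds))  # True
print(is_within_bounds(["search_flights", "read_private_files"], bounds))  # False
```

A real system would need a richer trace representation (arguments, data flow, probabilistic branching) rather than exact sequence matching, but the sketch captures why bounding makes deviation from the intended task detectable.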