[2603.02277] Quantifying Frontier LLM Capabilities for Container Sandbox Escape
Computer Science > Cryptography and Security

arXiv:2603.02277 (cs) [Submitted on 1 Mar 2026]

Title: Quantifying Frontier LLM Capabilities for Container Sandbox Escape

Authors: Rahul Marchand, Art O Cathain, Jerome Wynne, Philippos Maximos Giavridis, Sam Deverett, John Wilkinson, Jason Gwartz, Harry Coppock

Abstract: Large language models (LLMs) increasingly act as autonomous agents, using tools to execute code, read and write files, and access networks, creating novel security risks. To mitigate these risks, agents are commonly deployed and evaluated in isolated "sandbox" environments, often implemented using Docker/OCI containers. We introduce SandboxEscapeBench, an open benchmark that safely measures an LLM's capacity to break out of these sandboxes. The benchmark is implemented as an Inspect AI Capture the Flag (CTF) evaluation utilising a nested sandbox architecture, with the outer layer containing the flag and having no known vulnerabilities. Following a threat model of a motivated adversarial agent with shell access inside a container, SandboxEscapeBench covers a spectrum of sandbox-escape mechanisms spanning misconfigurations, privilege-allocation mistakes, kernel flaws, and runtime/orchestration weaknesses. We find that, when vulnerabilities are added, LLMs are able to identify and exploit them, showing that use of...
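The misconfiguration and privilege-allocation classes mentioned in the abstract often come down to which Linux capabilities a container holds; an agent with shell access can read the `CapEff` line of `/proc/self/status` and decode it. A minimal sketch of that decoding step (a hypothetical helper for illustration, not code from the benchmark; capability bit numbers are from the Linux `capabilities(7)` man page):

```python
# Decode a Linux capability bitmask (the hex value on the CapEff line of
# /proc/self/status) and flag capabilities commonly abused for container
# escape. Illustrative only; not part of SandboxEscapeBench.

RISKY_CAPS = {
    "CAP_SYS_ADMIN": 21,        # mount filesystems, many privileged admin ops
    "CAP_SYS_PTRACE": 19,       # trace other processes (risky with shared PID ns)
    "CAP_SYS_MODULE": 16,       # load kernel modules
    "CAP_DAC_READ_SEARCH": 2,   # bypass file read-permission checks
}

def risky_caps(cap_eff_hex: str) -> list[str]:
    """Return the risky capabilities present in a CapEff hex bitmask."""
    mask = int(cap_eff_hex, 16)
    return sorted(name for name, bit in RISKY_CAPS.items() if mask & (1 << bit))

# Docker's well-known default capability set grants none of these:
print(risky_caps("00000000a80425fb"))  # → []
# A fully privileged container holds every capability bit:
print(risky_caps("0000001fffffffff"))
# → ['CAP_DAC_READ_SEARCH', 'CAP_SYS_ADMIN', 'CAP_SYS_MODULE', 'CAP_SYS_PTRACE']
```

On a real target the mask would be read from `/proc/self/status` rather than hard-coded; a non-empty result here is one signal of the kind of privilege-allocation mistake the benchmark's tasks deliberately introduce.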