[2602.11210] SWE-MiniSandbox: Container-Free Reinforcement Learning for Building Software Engineering Agents
Computer Science > Software Engineering
arXiv:2602.11210 (cs)
[Submitted on 11 Feb 2026 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: SWE-MiniSandbox: Container-Free Reinforcement Learning for Building Software Engineering Agents
Authors: Danlong Yuan, Wei Wu, Zhengren Wang, Xueliang Zhao, Huishuai Zhang, Dongyan Zhao

Abstract: Reinforcement learning (RL) has become a key paradigm for training software engineering (SWE) agents, but existing pipelines typically rely on per-task containers for isolation. At scale, pre-built container images incur substantial storage overhead, slow environment setup, and require container-management privileges. We propose SWE-MiniSandbox, a lightweight, container-free method that enables scalable RL training of SWE agents without sacrificing isolation. Instead of relying on per-instance containers, SWE-MiniSandbox executes each task in an isolated workspace backed by kernel-level mechanisms, substantially reducing system overhead. It leverages lightweight environment pre-caching techniques to eliminate the need for bulky container images. As a result, our approach lowers disk usage to approximately 5% of that required by container-based pipelines and reduces environment preparation time to about 25% of the container baseline. Empirical results...
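The abstract's per-task workspace idea can be sketched in user space. The sketch below is illustrative only: the paper's actual kernel-level mechanisms (e.g. namespaces or overlay filesystems) are not detailed on this page, and the `build_cache`/`prepare_workspace` helpers and cache layout are assumptions, not the authors' implementation. It shows the shape of the savings: one pre-cached environment is shared across task workspaces via hardlinks, so spinning a task up and tearing it down costs almost no extra disk.

```python
import os
import shutil
import tempfile

def build_cache(root: str) -> str:
    """Pre-cache a task environment once (stand-in for the paper's
    environment pre-caching; the file layout here is made up)."""
    cache = os.path.join(root, "env-cache")
    os.makedirs(cache)
    with open(os.path.join(cache, "dep.py"), "w") as f:
        f.write("VERSION = '1.0'\n")
    return cache

def prepare_workspace(cache: str, root: str, task_id: str) -> str:
    """Materialize a per-task workspace from the shared cache.
    Hardlinks share disk blocks, so no image copy is needed; real
    isolation would additionally use kernel namespaces/overlayfs."""
    workspace = os.path.join(root, f"task-{task_id}")
    shutil.copytree(cache, workspace, copy_function=os.link)
    return workspace

root = tempfile.mkdtemp()
cache = build_cache(root)
workspace = prepare_workspace(cache, root, "42")

# The cached file and the workspace file share one inode,
# so the second "copy" consumes essentially no extra disk.
assert os.path.samefile(os.path.join(cache, "dep.py"),
                        os.path.join(workspace, "dep.py"))

shutil.rmtree(workspace)  # tearing down a task leaves the cache intact
assert os.path.exists(os.path.join(cache, "dep.py"))
```

In a real pipeline the workspace would also be entered through an isolation boundary before the agent runs; the hardlink trick only illustrates why avoiding per-instance images collapses the storage and setup costs the abstract quantifies.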