[2601.16206] Computer Environments Elicit General Agentic Intelligence in LLMs
Computer Science > Computation and Language
arXiv:2601.16206 (cs)
[Submitted on 22 Jan 2026 (v1), last revised 8 Apr 2026 (this version, v3)]

Title: Computer Environments Elicit General Agentic Intelligence in LLMs
Authors: Daixuan Cheng, Shaohan Huang, Yuxian Gu, Huatong Song, Guoxin Chen, Li Dong, Wayne Xin Zhao, Ji-Rong Wen, Furu Wei

Abstract: Agentic intelligence in large language models (LLMs) requires not only intrinsic model capabilities but also interaction with external environments. Equipping LLMs with computers is now a prevailing trend. However, the intrinsic value of the computer environment itself, in particular its potential to elicit general capabilities, has not been systematically investigated. Here we introduce LLM-in-Sandbox, which virtualizes the computer as a code sandbox with only basic functionalities, and demonstrate that this minimal setting elicits computer-based meta-capabilities for general task solving: external resource access, file management, and code execution. Without additional training, strong models achieve substantial gains (up to 15.5%) across mathematics, physics, chemistry, biomedicine, long-context understanding, and instruction following, while reducing token consumption by up to 8 times. Furthermore, we develop LLM-in-Sandbox-RL to train models exclusively on non-agentic data...
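The abstract describes virtualizing the computer as a minimal code sandbox in which the model can execute code and read back the result. The paper's actual implementation is not shown on this page; the following is only an illustrative sketch, assuming a subprocess-based execution tool (the function name `run_in_sandbox` and its interface are hypothetical, and a real sandbox would add container- or VM-level isolation rather than a bare subprocess):

```python
import os
import subprocess
import sys
import tempfile

def run_in_sandbox(code: str, timeout: float = 10.0) -> dict:
    """Execute model-emitted Python code in a separate process and return
    its output. This approximates the 'code execution' meta-capability;
    it provides process separation only, not true isolation."""
    # Write the code to a temporary script file.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # Run the script with a wall-clock timeout and capture its output,
        # which would be fed back to the model as the tool observation.
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return {
            "stdout": proc.stdout,
            "stderr": proc.stderr,
            "returncode": proc.returncode,
        }
    finally:
        os.unlink(path)

# Example: the model emits a snippet; the sandbox runs it and returns the result.
result = run_in_sandbox("print(sum(range(10)))")
print(result["stdout"].strip())  # → 45
```

In an agent loop, the string returned here would be appended to the conversation as an observation, letting the model iterate on file management and external resource access through the same execution channel.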