[2603.01209] Agents Learn Their Runtime: Interpreter Persistence as Training-Time Semantics
arXiv:2603.01209 (cs) [Submitted on 1 Mar 2026]

Computer Science > Artificial Intelligence

Title: Agents Learn Their Runtime: Interpreter Persistence as Training-Time Semantics

Authors: Victor May, Aaditya Salgarkar, Yishan Wang, Diganta Misra, Huu Nguyen

Abstract: Tool-augmented LLMs are increasingly deployed as agents that interleave natural-language reasoning with executable Python actions, as in CodeAct-style frameworks. In deployment, these agents rely on runtime state that persists across steps. By contrast, common training pipelines treat agent traces as token sequences, with execution semantics left implicit. This raises a data-centric question: Is state persistence merely an inference-time scaffold, or can models learn to exploit it when training data exposes the corresponding execution semantics? We isolate state persistence as a training-time variable. We introduce Opaque Knapsack, a procedurally generated family of partially observable optimization tasks designed to prevent one-shot solutions. Item attributes and constraints are hidden behind budgeted tool calls, forcing multi-turn control flow and iterative state revision. Holding task instances, prompts, tools, model, and supervision fixed, we generate paired trajectories differing only in whether interpreter state persists across steps or r...
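The contrast the abstract draws — interpreter state persisting across agent steps versus resetting between them — can be illustrated with a minimal sketch. This is a hypothetical illustration, not code from the paper: `run_steps` and its step strings are invented here, and a real CodeAct-style runtime would involve sandboxing and tool dispatch beyond this toy loop.

```python
def run_steps(steps, persistent=True):
    """Execute a list of code strings as successive agent 'actions'.

    With persistent=True, all steps share one namespace, so later steps
    can reuse variables defined earlier. With persistent=False, each
    step runs in a fresh namespace, as in a stateless training pipeline.
    """
    namespace = {}
    outputs = []
    for code in steps:
        if not persistent:
            namespace = {}  # reset: discard all prior state
        try:
            exec(code, namespace)
            outputs.append(namespace.get("result"))
        except NameError:
            outputs.append("error: undefined name")
    return outputs

# Step 2 depends on state created in step 1.
steps = [
    "items = [3, 7, 2]",    # step 1: build up state
    "result = sum(items)",  # step 2: reuse that state
]

print(run_steps(steps, persistent=True))   # [None, 12]
print(run_steps(steps, persistent=False))  # [None, 'error: undefined name']
```

Under persistence the second step succeeds by reading `items` from the shared namespace; under reset it fails, which is exactly the execution-semantics difference the paired trajectories are designed to expose.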