[2604.04373] Decocted Experience Improves Test-Time Inference in LLM Agents
arXiv:2604.04373 (cs) [Submitted on 6 Apr 2026]
Computer Science > Artificial Intelligence

Title: Decocted Experience Improves Test-Time Inference in LLM Agents
Authors: Maohao Shen, Kaiwen Zha, Zexue He, Zhang-Wei Hong, Siru Ouyang, J. Jon Ryu, Prasanna Sattigeri, Suhas Diggavi, Gregory Wornell

Abstract: There is growing interest in improving LLMs without updating model parameters. One well-established direction is test-time scaling, where increased inference-time computation (e.g., longer reasoning, sampling, or search) is used to improve performance. However, for complex reasoning and agentic tasks, naively scaling test-time compute can substantially increase cost and still lead to wasted budget on suboptimal exploration. In this paper, we explore \emph{context} as a complementary scaling axis for improving LLM performance, and systematically study how to construct better inputs that guide reasoning through \emph{experience}. We show that effective context construction critically depends on \emph{decocted experience}. We present a detailed analysis of experience-augmented agents, studying how to derive context from experience, how performance scales with accumulated experience, what characterizes good context, and which data structures best support context construction. We identify \emph{decocted experience} as a key...
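The abstract describes deriving context from accumulated experience to guide an agent's reasoning at test time. The paper itself does not specify an implementation here, but the general pattern — store lessons from past episodes, retrieve the ones relevant to a new task, and prepend them to the prompt — can be sketched as follows. All names (`Experience`, `ExperienceStore`, `build_context`) and the word-overlap retrieval heuristic are illustrative assumptions, not the paper's method:

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    """One distilled lesson from a past episode (hypothetical structure)."""
    task: str    # what the agent was asked to do
    lesson: str  # what it learned, condensed to a short note

@dataclass
class ExperienceStore:
    """A minimal experience buffer with naive relevance-based retrieval."""
    experiences: list = field(default_factory=list)

    def add(self, task: str, lesson: str) -> None:
        self.experiences.append(Experience(task, lesson))

    def retrieve(self, query: str, k: int = 2) -> list:
        # Assumed heuristic: score by word overlap between the new task
        # and the stored task description; a real system would likely use
        # embeddings or some learned relevance model.
        def score(exp: Experience) -> int:
            return len(set(query.lower().split()) & set(exp.task.lower().split()))
        ranked = sorted(self.experiences, key=score, reverse=True)
        return [e for e in ranked[:k] if score(e) > 0]

def build_context(store: ExperienceStore, task: str) -> str:
    """Prepend retrieved lessons to the task prompt; fall back to the bare task."""
    hits = store.retrieve(task)
    if not hits:
        return task
    lessons = "\n".join(f"- {e.lesson}" for e in hits)
    return f"Lessons from past attempts:\n{lessons}\n\nTask: {task}"
```

Under this sketch, as experience accumulates the store grows and retrieval selects which lessons enter the context window, which is one simple way to realize the "context as a scaling axis" idea the abstract argues for.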