[2603.24639] Experiential Reflective Learning for Self-Improving LLM Agents
Computer Science > Machine Learning
arXiv:2603.24639 (cs)
[Submitted on 25 Mar 2026]

Title: Experiential Reflective Learning for Self-Improving LLM Agents
Authors: Marc-Antoine Allard, Arnaud Teinturier, Victor Xing, Gautier Viaud

Abstract: Recent advances in large language models (LLMs) have enabled the development of autonomous agents capable of complex reasoning and multi-step problem solving. However, these agents struggle to adapt to specialized environments and do not leverage past interactions, approaching each new task from scratch regardless of their accumulated experience. We introduce Experiential Reflective Learning (ERL), a simple self-improvement framework that enables rapid environment adaptation through experiential learning. ERL reflects on task trajectories and outcomes to generate heuristics, capturing actionable lessons that transfer across tasks. At test time, relevant heuristics are retrieved based on the current task and injected into the agent's context to guide execution. On the Gaia2 benchmark, ERL improves success rate by 7.8% over a ReAct baseline, with large gains in task completion reliability, and outperforms prior experiential learning methods. Through systematic ablations, we find that selective retrieval is essential and that heuristics provide more transferable abstractions tha...
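The retrieve-and-inject step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Heuristic` store, the token-overlap similarity, and the thresholds (`k`, `min_sim`) are all assumptions standing in for whatever retriever the authors actually use.

```python
from dataclasses import dataclass

@dataclass
class Heuristic:
    task_summary: str  # task on which the lesson was learned (via reflection)
    lesson: str        # actionable lesson distilled from the trajectory/outcome

class HeuristicStore:
    """Toy in-memory heuristic store; similarity here is plain Jaccard
    token overlap, a stand-in for a real retriever (e.g. embeddings)."""

    def __init__(self):
        self.items: list[Heuristic] = []

    def add(self, h: Heuristic) -> None:
        self.items.append(h)

    @staticmethod
    def _sim(a: str, b: str) -> float:
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    def retrieve(self, task: str, k: int = 2, min_sim: float = 0.2) -> list[Heuristic]:
        # Selective retrieval: rank by similarity, keep top-k above a threshold,
        # so unrelated lessons are never injected into the context.
        ranked = sorted(self.items, key=lambda h: self._sim(task, h.task_summary),
                        reverse=True)
        return [h for h in ranked[:k] if self._sim(task, h.task_summary) >= min_sim]

def build_prompt(task: str, store: HeuristicStore) -> str:
    """Inject retrieved lessons into the agent's context before execution."""
    hits = store.retrieve(task)
    if not hits:
        return task
    lessons = "\n".join(f"- {h.lesson}" for h in hits)
    return f"Relevant lessons from past tasks:\n{lessons}\n\nTask: {task}"
```

The `min_sim` cutoff mirrors the abstract's ablation finding that *selective* retrieval matters: when no stored heuristic is similar enough to the current task, the prompt is left untouched rather than padded with irrelevant lessons.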