[2510.16079] EvolveR: Self-Evolving LLM Agents through an Experience-Driven Lifecycle
Computer Science > Computation and Language
arXiv:2510.16079 (cs)
[Submitted on 17 Oct 2025 (v1), last revised 8 May 2026 (this version, v2)]

Title: EvolveR: Self-Evolving LLM Agents through an Experience-Driven Lifecycle
Authors: Rong Wu, Xiaoman Wang, Jianbiao Mei, Pinlong Cai, Daocheng Fu, Cheng Yang, Licheng Wen, Xuemeng Yang, Yufan Shen, Yuxin Wang, Botian Shi

Abstract: Current Large Language Model (LLM) agents show strong performance in tool use but lack the crucial capability to systematically learn from their own experiences. While existing frameworks mainly focus on mitigating external knowledge gaps, they fail to address a more fundamental limitation: the inability to iteratively refine problem-solving strategies. In this work, we introduce EvolveR, a framework designed to enable agents to self-improve through a complete, closed-loop experience lifecycle. This lifecycle comprises two key stages: (1) Offline Self-Distillation, where the agent's interaction trajectories are synthesized into a structured repository of abstract, reusable strategic principles; and (2) Online Interaction, where the agent interacts with tasks and actively retrieves distilled principles to guide its decision-making, accumulating a diverse set of behavioral trajectories. This loop employs a policy reinforcement mechanism to iterativ...
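The two-stage lifecycle described in the abstract can be illustrated with a minimal sketch. This is purely hypothetical scaffolding, not the paper's implementation: the names `ExperienceRepository`, `distill`, `retrieve`, and `online_interaction` are invented here, and the "distillation" is reduced to simple string templating where the paper would use an LLM.

```python
class ExperienceRepository:
    """Stores abstract strategic principles distilled from past trajectories.
    (Illustrative stand-in for the paper's structured repository.)"""

    def __init__(self):
        self.principles = []  # list of (topic, principle-text) pairs

    def distill(self, trajectories):
        """Offline self-distillation: compress successful raw trajectories
        into reusable, topic-indexed principles."""
        for traj in trajectories:
            if traj["success"]:
                self.principles.append(
                    (traj["topic"], f"For {traj['topic']} tasks: {traj['strategy']}")
                )

    def retrieve(self, topic):
        """Return principles relevant to the current task topic."""
        return [text for t, text in self.principles if t == topic]


def online_interaction(repo, task):
    """Online stage: retrieve distilled principles to guide decision-making,
    then record the new trajectory so it can feed the next distillation round.
    A real agent would condition an LLM policy on `guidance`; here we only
    record whether guidance was available."""
    guidance = repo.retrieve(task["topic"])
    trajectory = {
        "topic": task["topic"],
        "strategy": "follow retrieved guidance" if guidance else "explore",
        "success": True,
    }
    return trajectory, guidance


# One closed-loop cycle: offline distillation -> online interaction -> distillation again.
repo = ExperienceRepository()
seed = [{"topic": "search", "strategy": "decompose the query before tool calls", "success": True}]
repo.distill(seed)                                               # offline stage
traj, guidance = online_interaction(repo, {"topic": "search"})   # online stage
repo.distill([traj])                                             # experience flows back in
print(len(repo.principles))  # → 2
```

The point of the sketch is the closed loop: trajectories produced online become the input to the next offline distillation pass, so the repository of principles grows with the agent's own experience.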