[2511.02805] MemSearcher: Training LLMs to Reason, Search and Manage Memory via End-to-End Reinforcement Learning
Computer Science > Computation and Language

arXiv:2511.02805 (cs)

[Submitted on 4 Nov 2025 (v1), last revised 8 May 2026 (this version, v2)]

Title: MemSearcher: Training LLMs to Reason, Search and Manage Memory via End-to-End Reinforcement Learning

Authors: Qianhao Yuan, Jie Lou, Zichao Li, Jiawei Chen, Yaojie Lu, Hongyu Lin, Le Sun, Debing Zhang, Xianpei Han

Abstract: LLM-based search agents often concatenate the full interaction history into the context, producing long and noisy inputs and increasing compute cost and GPU memory overhead. To address this issue, we propose MemSearcher, an agent framework that maintains a compact memory during multi-turn interactions, retaining only question-relevant information and thereby keeping the context length stable across turns. Training MemSearcher is challenging because each trajectory spans multiple turns under different LLM contexts, making each turn an independent optimization target in reinforcement learning. We introduce multi-context GRPO, which propagates trajectory-level advantages to all turns for end-to-end optimization. Experiments demonstrate that MemSearcher outperforms strong history-concatenation (ReAct-style) baselines on a range of public datasets while maintaining nearly constant token counts across multi-turn interactions...
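The abstract's core algorithmic idea, propagating a single trajectory-level advantage to every turn of a multi-turn trajectory, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name and the simple group normalization below are assumptions, following the standard GRPO recipe of normalizing each trajectory's reward against its sampling group and then broadcasting that advantage to all turns:

```python
from statistics import mean, pstdev

def multi_context_grpo_advantages(rewards, turns_per_traj, eps=1e-8):
    """Hypothetical sketch of multi-context GRPO advantage assignment.

    rewards:        one scalar reward per trajectory in the sampling group
    turns_per_traj: number of turns (separate LLM contexts) per trajectory
    Returns a per-trajectory list of per-turn advantages, where every
    turn inherits its trajectory's group-normalized advantage.
    """
    # GRPO-style group normalization: advantage relative to the group.
    mu, sigma = mean(rewards), pstdev(rewards)
    traj_adv = [(r - mu) / (sigma + eps) for r in rewards]
    # Broadcast the trajectory-level advantage to all of its turns,
    # so each turn's context is optimized toward the same outcome.
    return [[a] * n for a, n in zip(traj_adv, turns_per_traj)]
```

For example, a group of two trajectories with rewards 1.0 and 0.0 and 3 and 2 turns respectively yields advantages of roughly +1 for every turn of the first trajectory and -1 for every turn of the second, making each turn a training target even though the turns were generated under different contexts.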