[2602.18493] Learning to Remember: End-to-End Training of Memory Agents for Long-Context Reasoning
Summary
The paper presents the Unified Memory Agent (UMA), an end-to-end reinforcement learning framework that unifies memory operations and question answering for long-context reasoning.
Why It Matters
This research addresses the limitations of current long-context LLMs and RAG systems by proposing a novel framework that integrates memory operations with question answering, improving dynamic reasoning and learning tasks. The findings could significantly impact how AI systems manage and utilize memory, making them more efficient and reliable.
Key Takeaways
- UMA offers a unified approach to memory management and question answering.
- The framework maintains a dual memory representation for enhanced context handling.
- UMA outperforms existing long-context and RAG systems in dynamic reasoning tasks.
- Introduces Ledger-QA, a benchmark for evaluating long-horizon memory behavior.
- Demonstrates the importance of proactive memory consolidation in AI.
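To make the "long-horizon memory behavior" that Ledger-QA targets concrete, here is a toy illustration of continuous state tracking, where the answer is a latent value accumulated across updates rather than a span that appears verbatim in the stream. The data and function name are hypothetical and not drawn from the benchmark itself.

```python
# Toy illustration of Ledger-QA-style continuous state tracking
# (hypothetical data; not from the actual benchmark). The answer to a
# query is a latent value produced by folding updates into state, not
# a text span that appears anywhere in the input stream.

def apply_updates(stream):
    """Fold a stream of (key, op, amount) updates into a final state."""
    state = {}
    for key, op, amount in stream:
        if op == "set":
            state[key] = amount
        elif op == "add":
            state[key] = state.get(key, 0) + amount
        elif op == "delete":
            state.pop(key, None)
    return state

stream = [
    ("alice_balance", "set", 100),
    ("bob_balance", "set", 50),
    ("alice_balance", "add", -30),
    ("alice_balance", "add", 15),
]
state = apply_updates(stream)
print(state["alice_balance"])  # 85 -- never stated verbatim in the stream
```

A passive retriever that searches the stream for "alice_balance" finds only the stale values 100, -30, and 15; answering correctly requires maintaining state as updates arrive, which is the behavior UMA's proactive consolidation is meant to learn.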
Computer Science > Machine Learning
arXiv:2602.18493 (cs)
[Submitted on 13 Feb 2026]
Title: Learning to Remember: End-to-End Training of Memory Agents for Long-Context Reasoning
Authors: Kehao Zhang, Shangtong Gui, Sheng Yang, Wei Chen, Yang Feng
Abstract: Long-context LLMs and Retrieval-Augmented Generation (RAG) systems process information passively, deferring state tracking, contradiction resolution, and evidence aggregation to query time, which becomes brittle under ultra-long streams with frequent updates. We propose the Unified Memory Agent (UMA), an end-to-end reinforcement learning framework that unifies memory operations and question answering within a single policy. UMA maintains a dual memory representation: a compact core summary for global context and a structured Memory Bank that supports explicit CRUD (create, update, delete, reorganize) over key-value entries, enabling proactive consolidation during streaming. To evaluate long-horizon memory behavior, we introduce Ledger-QA, a diagnostic benchmark for continuous state tracking where answers are latent values derived from accumulated updates rather than local span retrieval. Across 13 datasets spanning Ledger-QA, Test-Time Learning, and Accurate Retrieval, UMA substantially outperforms long-context and RAG baselines on dynamic reasoning…
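The dual memory representation described in the abstract can be sketched as a compact summary string alongside a key-value store with explicit CRUD operations. This is a minimal illustrative sketch under assumptions; all class and method names are hypothetical, and the paper does not specify this API.

```python
# Minimal sketch of a dual memory representation as the abstract
# describes it: a compact core summary for global context plus a
# structured Memory Bank with explicit CRUD (create, update, delete,
# reorganize) over key-value entries. Names are hypothetical; this is
# not the paper's implementation.

class DualMemory:
    def __init__(self):
        self.core_summary = ""   # compact global context
        self.bank = {}           # structured key-value Memory Bank

    # Operations the policy could emit while streaming:
    def create(self, key, value):
        self.bank[key] = value

    def update(self, key, value):
        if key in self.bank:
            self.bank[key] = value

    def delete(self, key):
        self.bank.pop(key, None)

    def reorganize(self, merged_key, keys):
        """Consolidate several entries into one (proactive consolidation)."""
        merged = "; ".join(str(self.bank.pop(k)) for k in keys if k in self.bank)
        self.bank[merged_key] = merged

mem = DualMemory()
mem.create("meeting_time", "3pm")
mem.update("meeting_time", "4pm")   # contradiction resolved at write time
mem.create("meeting_room", "B12")
mem.reorganize("meeting", ["meeting_time", "meeting_room"])
print(mem.bank)  # {'meeting': '4pm; B12'}
```

The point of the sketch is the contrast with passive pipelines: the contradiction (3pm vs. 4pm) is resolved when the update arrives, during streaming, so nothing has to be reconciled at query time.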