[2603.03379] MemSifter: Offloading LLM Memory Retrieval via Outcome-Driven Proxy Reasoning
Computer Science > Information Retrieval
arXiv:2603.03379 (cs) [Submitted on 3 Mar 2026]

Title: MemSifter: Offloading LLM Memory Retrieval via Outcome-Driven Proxy Reasoning
Authors: Jiejun Tan, Zhicheng Dou, Liancheng Zhang, Yuyang Hu, Yiruo Cheng, Ji-Rong Wen

Abstract: As Large Language Models (LLMs) are increasingly used for long-duration tasks, maintaining effective long-term memory has become a critical challenge. Current methods face a trade-off between cost and accuracy: simple storage methods often fail to retrieve relevant information, while complex indexing methods (such as memory graphs) require heavy computation and can cause information loss. Furthermore, relying on the working LLM to process all memories is computationally expensive and slow. To address these limitations, we propose MemSifter, a novel framework that offloads the memory retrieval process to a small-scale proxy model. Instead of increasing the burden on the primary working LLM, MemSifter uses a smaller model to reason about the task before retrieving the necessary information. This approach requires no heavy computation during the indexing phase and adds minimal overhead during inference. To optimize the proxy model, we introduce a memory-specific Reinforcement Learning (RL) training paradigm. We design a task-outcome-ori...
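The abstract's core idea, a small proxy model that reasons about the task and filters stored memories before the working LLM sees them, can be illustrated with a minimal sketch. This is not the paper's implementation: the `proxy_retrieve` function and its bag-of-words scorer are hypothetical stand-ins for a trained proxy model, shown only to make the offloading pattern concrete.

```python
from collections import Counter
import math

def bow_vector(text):
    """Bag-of-words term counts (stand-in for a learned encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def proxy_retrieve(task, memories, k=2):
    """Hypothetical proxy step: a cheap model scores every stored
    memory against the task and returns only the top-k entries,
    so the expensive working LLM never reads the full store."""
    q = bow_vector(task)
    ranked = sorted(memories, key=lambda m: cosine(q, bow_vector(m)), reverse=True)
    return ranked[:k]

memories = [
    "User's preferred language is Python",
    "Meeting with the design team is scheduled for Friday",
    "User's deployment target is an ARM-based edge device",
]
# Only the filtered subset would be placed into the working LLM's context.
print(proxy_retrieve("write a deployment script in the user's preferred language", memories))
```

In the paper this filtering step is performed by a trained small-scale model rather than lexical overlap, and the RL paradigm sketched in the abstract would reward the proxy based on the downstream task outcome rather than retrieval similarity alone.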