[2603.00026] ActMem: Bridging the Gap Between Memory Retrieval and Reasoning in LLM Agents
Computer Science > Computation and Language
arXiv:2603.00026 (cs)
[Submitted on 4 Feb 2026]

Title: ActMem: Bridging the Gap Between Memory Retrieval and Reasoning in LLM Agents
Authors: Xiaohui Zhang, Zequn Sun, Chengyuan Yang, Yaqin Jin, Yazhong Zhang, Wei Hu

Abstract: Effective memory management is essential for large language model (LLM) agents handling long-term interactions. Current memory frameworks typically treat agents as passive "recorders" and retrieve information without understanding its deeper implications, so they may fail in scenarios that require conflict detection and complex decision-making. To bridge this critical gap, we propose ActMem, a novel actionable memory framework that integrates memory retrieval with active causal reasoning. ActMem transforms unstructured dialogue history into a structured causal and semantic graph. By leveraging counterfactual reasoning and commonsense completion, it enables agents to deduce implicit constraints and resolve potential conflicts between past states and current intentions. Furthermore, we introduce a comprehensive dataset, ActMemEval, to evaluate agent reasoning capabilities in logic-driven scenarios, moving beyond the fact-retrieval focus of existing memory benchmarks. Experiments demonst...
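The abstract describes converting dialogue history into a causal and semantic graph, then checking new intentions against remembered states. As a rough illustration only (the paper's actual graph schema and conflict-resolution method are not given in the abstract; all names and structures below are hypothetical), one could sketch the general idea like this:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryGraph:
    """Toy memory graph: entity states plus cause->effect edges.

    This is an illustrative sketch, not the ActMem implementation.
    """
    states: dict = field(default_factory=dict)   # entity -> {attribute: value}
    causes: list = field(default_factory=list)   # (cause, effect) edges

    def record(self, entity: str, attribute: str, value: str) -> None:
        # Record a state extracted from a past dialogue turn.
        self.states.setdefault(entity, {})[attribute] = value

    def add_cause(self, cause: str, effect: str) -> None:
        # Add a commonsense/causal edge, e.g. an allergy implies a constraint.
        self.causes.append((cause, effect))

    def conflicts(self, entity: str, attribute: str, intended: str):
        # Return the stored value if a new intention contradicts memory,
        # else None; a real system would reason over the causal edges too.
        stored = self.states.get(entity, {}).get(attribute)
        if stored is not None and stored != intended:
            return stored
        return None

g = MemoryGraph()
g.record("user", "allergy", "peanuts")
g.add_cause("allergy:peanuts", "avoid:peanut dishes")
print(g.conflicts("user", "allergy", "none"))  # prints: peanuts
```

Here the stored state ("user is allergic to peanuts") contradicts a later intention ("no allergy"), so the check surfaces the conflict rather than silently retrieving facts, which is the gap the abstract says ActMem targets.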