[2412.06531] Unraveling the Complexity of Memory in RL Agents: an Approach for Classification and Evaluation
Computer Science > Machine Learning

arXiv:2412.06531 (cs)
[Submitted on 9 Dec 2024 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: Unraveling the Complexity of Memory in RL Agents: an Approach for Classification and Evaluation
Authors: Egor Cherepanov, Nikita Kachaev, Artem Zholus, Alexey K. Kovalev, Aleksandr I. Panov

Abstract: The incorporation of memory into agents is essential for numerous tasks within the domain of Reinforcement Learning (RL). In particular, memory is paramount for tasks that require the use of past information, adaptation to novel environments, and improved sample efficiency. However, the term "memory" encompasses a wide range of concepts, which, coupled with the lack of a unified methodology for validating an agent's memory, leads to erroneous judgments about agents' memory capabilities and prevents objective comparison with other memory-enhanced agents. This paper aims to streamline the concept of memory in RL by providing practical, precise definitions of agent memory types, such as long-term vs. short-term memory and declarative vs. procedural memory, inspired by cognitive science. Using these definitions, we categorize different classes of agent memory, propose a robust experimental methodology for evaluating the memory capabilities of RL agents, a...
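The paper's actual methodology is not reproduced on this page, but the general idea of probing memory by varying how long information must be retained can be sketched in a toy form. Everything below (the `DelayedRecallEnv` environment, the `evaluate` helper, and all parameter names) is a hypothetical illustration, not code from the paper: an agent observes a cue only at the first step and must reproduce it after a configurable delay, so sweeping the delay past the agent's context length separates short-term from long-term recall.

```python
import random


class DelayedRecallEnv:
    """Toy memory probe (illustrative, not from the paper): a binary cue is
    shown only at step 0 and must be reproduced after `delay` steps."""

    def __init__(self, delay, seed=None):
        self.delay = delay
        self.rng = random.Random(seed)

    def reset(self):
        self.cue = self.rng.choice([0, 1])
        self.t = 0
        return self.cue  # the cue is observable only at the first step

    def step(self, action):
        self.t += 1
        done = self.t >= self.delay
        # Reward is given only at the final step, for recalling the cue.
        reward = 1.0 if (done and action == self.cue) else 0.0
        return 0, reward, done  # observation is uninformative after t = 0


def evaluate(policy, delays, episodes=100, seed=0):
    """Mean recall reward per delay; performance that degrades as the delay
    grows suggests the agent relies on short-term rather than long-term memory."""
    results = {}
    for d in delays:
        env = DelayedRecallEnv(d, seed=seed)
        total = 0.0
        for _ in range(episodes):
            cue = env.reset()
            obs, done = cue, False
            while not done:
                action = policy(obs, cue)
                obs, reward, done = env.step(action)
            total += reward
        results[d] = total / episodes
    return results
```

As a sanity check, a policy with perfect memory (always answering the stored cue) scores 1.0 at every delay, while a memoryless policy that ignores the cue hovers near chance level; plotting such curves over increasing delays is one simple way to make "memory capability" measurable.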