[2603.01761] Modular Memory is the Key to Continual Learning Agents
Computer Science > Machine Learning

arXiv:2603.01761 (cs) [Submitted on 2 Mar 2026]

Title: Modular Memory is the Key to Continual Learning Agents

Authors: Vaggelis Dorovatas, Malte Schwerin, Andrew D. Bagdanov, Lucas Caccia, Antonio Carta, Laurent Charlin, Barbara Hammer, Tyler L. Hayes, Timm Hess, Christopher Kanan, Dhireesha Kudithipudi, Xialei Liu, Vincenzo Lomonaco, Jorge Mendez-Mendez, Darshan Patil, Ameya Prabhu, Elisa Ricci, Tinne Tuytelaars, Gido M. van de Ven, Liyuan Wang, Joost van de Weijer, Jonghyun Choi, Martin Mundt, Rahaf Aljundi

Abstract: Foundation models have transformed machine learning through large-scale pretraining and increased test-time compute. Despite surpassing human performance in several domains, these models remain fundamentally limited in continuous operation, experience accumulation, and personalization, capabilities that are central to adaptive intelligence. While continual learning research has long targeted these goals, its historical focus on in-weight learning (IWL), i.e., updating a single model's parameters to absorb new knowledge, has rendered catastrophic forgetting a persistent challenge. Our position is that combining the strengths of IWL and the newly emerged capabilities of in-context learning (ICL) through the design of modular memory is the missing piece...
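To make the IWL/ICL distinction concrete, below is a minimal, hypothetical sketch of the kind of design the abstract gestures at: new experiences are written to modular, retrievable memory and consumed in context (ICL, leaving weights untouched), while a separate consolidation step could later distill a stable module into the parameters (IWL). All names here (MemoryModule, ModularMemoryAgent, consolidate) are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: modular memory bridging ICL (retrieval into context)
# and IWL (periodic consolidation into weights). Not from the paper.
from dataclasses import dataclass, field


@dataclass
class MemoryModule:
    """One self-contained memory module, e.g. per task, user, or domain."""
    name: str
    entries: list = field(default_factory=list)

    def write(self, experience: str) -> None:
        self.entries.append(experience)

    def read(self, query: str, k: int = 3) -> list:
        # Toy retrieval: rank stored experiences by word overlap with the query.
        def overlap(e: str) -> int:
            return len(set(e.lower().split()) & set(query.lower().split()))
        return sorted(self.entries, key=overlap, reverse=True)[:k]


class ModularMemoryAgent:
    def __init__(self):
        self.modules: dict = {}

    def remember(self, module: str, experience: str) -> None:
        # ICL path: writing to a module never touches model weights,
        # so it cannot overwrite previously acquired knowledge.
        self.modules.setdefault(module, MemoryModule(module)).write(experience)

    def build_context(self, query: str) -> str:
        # Retrieved entries are prepended to the prompt and used in context.
        retrieved = [e for m in self.modules.values() for e in m.read(query)]
        return "\n".join(retrieved) + "\n\nQuery: " + query

    def consolidate(self, module: str) -> None:
        # IWL path (stub): once a module is stable, distill it into the
        # weights, e.g. by fine-tuning on its entries, then retire it.
        stable = self.modules.pop(module, None)
        if stable:
            print(f"fine-tune on {len(stable.entries)} entries from '{stable.name}'")


agent = ModularMemoryAgent()
agent.remember("user_prefs", "The user prefers metric units.")
agent.remember("user_prefs", "The user writes Python, not R.")
print(agent.build_context("Which units should the report use?"))
agent.consolidate("user_prefs")
```

Under this reading, forgetting is contained by construction: each module is isolated, so accumulating experience in memory is additive, and only the deliberate consolidation step touches shared parameters.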