[2603.01097] Understanding LoRA as Knowledge Memory: An Empirical Analysis
Computer Science > Machine Learning
arXiv:2603.01097 (cs)
[Submitted on 1 Mar 2026]

Title: Understanding LoRA as Knowledge Memory: An Empirical Analysis
Authors: Seungju Back, Dongwoo Lee, Naun Kang, Taehee Lee, S. K. Hong, Youngjune Gwon, Sungjin Ahn

Abstract: Continuous knowledge updating for pre-trained large language models (LLMs) is increasingly necessary yet remains challenging. Although inference-time methods such as In-Context Learning (ICL) and Retrieval-Augmented Generation (RAG) are popular, they face constraints in context budgets, cost, and retrieval fragmentation. Departing from these context-dependent paradigms, this work investigates a parametric approach that uses Low-Rank Adaptation (LoRA) as a modular knowledge memory. Although a few recent works examine this concept, the fundamental mechanics governing its capacity and composability remain largely unexplored. We bridge this gap through the first systematic empirical study mapping the design space of LoRA-based memory, ranging from characterizing storage capacity and optimizing internalization to scaling multi-module systems and evaluating long-context reasoning. Rather than proposing a single architecture, we provide practical guidance on the operational boundaries of LoRA memory. Overall, our findings position LoRA as the complementary axis of memory alongs...
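For readers unfamiliar with the mechanism the abstract builds on, the following is a minimal, generic sketch of a LoRA-augmented linear layer (standard LoRA as described by Hu et al., not the paper's specific setup; all names and dimensions here are illustrative). The frozen base weight W stays untouched, while the trainable low-rank pair (A, B) carries all updates, which is what makes a LoRA adapter behave like a swappable parametric memory module:

```python
import numpy as np

class LoRALinear:
    """Linear layer with a frozen base weight and a low-rank trainable delta."""

    def __init__(self, d_in, d_out, rank, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))        # frozen pre-trained weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, rank))                   # trainable up-projection, zero-initialized
        self.scaling = alpha / rank                        # standard LoRA scaling factor

    def forward(self, x):
        # y = x W^T + (alpha / r) * x A^T B^T : base output plus low-rank delta
        return x @ self.W.T + self.scaling * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_in=8, d_out=4, rank=2)
x = np.ones((1, 8))
y0 = layer.forward(x)
# With B zero-initialized the delta vanishes, so the adapted layer initially
# matches the frozen base model exactly.
assert np.allclose(y0, x @ layer.W.T)
```

Because the delta factorizes as B @ A, an adapter for a (d_out × d_in) weight costs only rank × (d_in + d_out) parameters, and distinct (A, B) pairs can be stored, swapped, or merged independently of the base model, which is the modularity the abstract's "multi-module systems" refers to.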