[2411.00623] Replay-Free Continual Low-Rank Adaptation with Dynamic Memory
Computer Science > Computer Vision and Pattern Recognition
arXiv:2411.00623 (cs)
[Submitted on 1 Nov 2024 (v1), last revised 24 Mar 2026 (this version, v4)]
Title: Replay-Free Continual Low-Rank Adaptation with Dynamic Memory
Authors: Huancheng Chen, Jingtao Li, Weiming Zhuang, Chen Chen, Lingjuan Lyu

Abstract: We revisit continual learning (CL), which enables pre-trained vision transformers (ViTs) to be sequentially fine-tuned on new downstream tasks over time. However, as the scale of these models increases, catastrophic forgetting becomes an increasingly serious challenge. Recent studies highlight a crossover between CL techniques and parameter-efficient fine-tuning (PEFT), which fine-tunes only a small set of trainable parameters to adapt to downstream tasks, as in low-rank adaptation (LoRA). While LoRA achieves faster convergence and requires fewer trainable parameters, it has seldom been explored in the context of continual learning. To address this gap, we propose a novel PEFT-CL method called Dual Low-Rank Adaptation (DualLoRA), which introduces both an orthogonal LoRA adapter and a residual LoRA adapter parallel to the pre-trained weights in each layer. These components are orchestrated by a dynamic memory mechanism to strike a balance between stability and plasticity...
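To make the layer structure concrete, here is a minimal NumPy sketch of a DualLoRA-style layer as described in the abstract: a frozen pre-trained weight with two parallel low-rank adapters, plus a projection that keeps one adapter's updates orthogonal to a stored subspace. All names (`A_orth`, `B_orth`, `A_res`, `B_res`, `project_orthogonal`, the memory matrix `M`) and the exact update rule are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 2

# Frozen pre-trained weight (stand-in for one ViT layer's weight).
W0 = rng.standard_normal((d_out, d_in))

# Hypothetical "orthogonal" adapter: low-rank factors whose gradient is
# projected away from a memory subspace M of directions important to
# earlier tasks (stability).
A_orth = rng.standard_normal((r, d_in)) * 0.01
B_orth = np.zeros((d_out, r))  # zero-init so the adapter starts as a no-op

# Hypothetical "residual" adapter: unconstrained low-rank factors that
# keep plasticity for the current task.
A_res = rng.standard_normal((r, d_in)) * 0.01
B_res = np.zeros((d_out, r))

def forward(x):
    """Layer output: frozen path plus both low-rank adapter paths."""
    return W0 @ x + B_orth @ (A_orth @ x) + B_res @ (A_res @ x)

def project_orthogonal(grad_A, M):
    """Remove from grad_A the components lying in the row space of M.

    M: (k, d_in) matrix whose rows are orthonormal directions the
    dynamic memory has marked as important to past tasks.
    """
    return grad_A - (grad_A @ M.T) @ M

x = rng.standard_normal(d_in)
y = forward(x)  # equals W0 @ x at init, since both B factors are zero
```

Because both `B` factors start at zero (standard LoRA initialization), the layer initially reproduces the frozen model exactly; training then moves only the adapter factors, with the orthogonal adapter's updates confined to directions outside the memory subspace.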