[2603.20860] Restoring Neural Network Plasticity for Faster Transfer Learning
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.20860 (cs) [Submitted on 21 Mar 2026]

Title: Restoring Neural Network Plasticity for Faster Transfer Learning
Authors: Xander Coetzer, Arné Schreuder, Anna Sergeevna Bosman

Abstract: Transfer learning with models pretrained on ImageNet has become standard practice in computer vision. Transfer learning refers to fine-tuning the pretrained weights of a neural network on a downstream task, typically unrelated to ImageNet. However, pretrained weights can become saturated and may yield vanishingly small gradients, failing to adapt to the downstream task. This hinders the model's ability to train effectively, and is commonly referred to as loss of neural plasticity. Loss of plasticity may prevent the model from fully adapting to the target domain, especially when the downstream dataset is atypical in nature. While this issue has been widely explored in continual learning, it remains relatively understudied in the context of transfer learning. In this work, we propose a targeted weight re-initialization strategy to restore neural plasticity prior to fine-tuning. Our experiments show that both convolutional neural networks (CNNs) and vision transformers (ViTs) benefit from this approach, yielding higher test accuracy with faster convergence on ...
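The abstract does not specify how saturated weights are selected or re-initialized, so the following is only a minimal NumPy sketch of the general idea of targeted re-initialization before fine-tuning. The saturation criterion (rows whose mean absolute weight far exceeds the layer average) and the He-style re-draw are illustrative assumptions, not the paper's actual method.

```python
import numpy as np


def reinitialize_saturated_rows(W, rng, threshold=3.0):
    """Re-initialize rows (units) of a weight matrix flagged as saturated.

    A row is flagged when its mean |weight| exceeds `threshold` times the
    layer-wide mean -- a hypothetical saturation criterion chosen for
    illustration only. Flagged rows are re-drawn with He-style Gaussian
    initialization; all other rows keep their pretrained values.
    Returns the updated matrix and the number of rows re-initialized.
    """
    row_mag = np.abs(W).mean(axis=1)             # per-unit weight magnitude
    saturated = row_mag > threshold * row_mag.mean()
    fan_in = W.shape[1]
    W = W.copy()
    # Re-draw only the flagged units, leaving pretrained weights elsewhere.
    W[saturated] = rng.normal(0.0, np.sqrt(2.0 / fan_in),
                              size=(int(saturated.sum()), fan_in))
    return W, int(saturated.sum())


rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))      # stand-in for one pretrained layer
W[0] *= 50.0                     # artificially saturate one unit
W_new, n_reset = reinitialize_saturated_rows(W, rng, threshold=3.0)
```

In a real fine-tuning pipeline, a pass like this would be applied per layer to the pretrained checkpoint before any gradient updates on the downstream task; the untouched rows preserve the transferable ImageNet features while the re-drawn rows regain plasticity.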