[2602.12323] The Appeal and Reality of Recycling LoRAs with Adaptive Merging
Summary
This article examines how well adaptive merging methods recycle existing LoRA modules, finding that they offer limited benefit over simply training a new LoRA on the same data.
Why It Matters
As fine-tuned LoRA modules proliferate on public model hubs, understanding whether they can be usefully recycled and merged matters for model performance and efficiency. This research addresses a gap in the existing literature by evaluating the practical value of merging user-contributed LoRAs, which could shape future model development strategies.
Key Takeaways
- Adaptive merging methods can improve performance over the base model, but provide limited benefit over training a new LoRA on the same data used to set merging coefficients.
- The choice of LoRAs to merge is less critical than previously thought.
- Randomly initialized LoRAs can yield similar results to recycled ones, suggesting a regularization effect.
- Positive transfer is possible with highly relevant LoRAs in the pool.
- The study provides empirical evidence and releases model checkpoints and code for further research.
Computer Science > Machine Learning
arXiv:2602.12323 (cs) [Submitted on 12 Feb 2026]
Title: The Appeal and Reality of Recycling LoRAs with Adaptive Merging
Authors: Haokun Liu, Gyung Hyun Je, Marco Ciccone, Zhenlin Xu, Prasanth YSS, Colin Raffel
Abstract: The widespread availability of fine-tuned LoRA modules for open pre-trained models has led to an interest in methods that can adaptively merge LoRAs to improve performance. These methods typically include some way of selecting LoRAs from a pool and tuning merging coefficients based on a task-specific dataset. While adaptive merging methods have demonstrated improvements in some settings, no past work has attempted to recycle LoRAs found "in the wild" on model repositories like the Hugging Face Hub. To address this gap, we consider recycling from a pool of nearly 1,000 user-contributed LoRAs trained from the Llama 3.1 8B-Instruct language model. Our empirical study includes a range of adaptive and non-adaptive merging methods in addition to a new method designed via a wide search over the methodological design space. We demonstrate that adaptive merging methods can improve performance over the base model but provide limited benefit over training a new LoRA on the same data used to set merging coefficients. We additionally find not only that the specific choice of LoRAs to...
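The core operation the abstract describes, combining a pool of LoRA updates with tuned merging coefficients, can be sketched as follows. This is a minimal illustration, not the paper's method: the shapes, pool size, and coefficient values are made up, and a real adaptive method would also select which LoRAs to include and tune the coefficients on a task-specific dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 16, 16, 4   # base weight is d x k; each LoRA factorizes its update at rank r
n_loras = 3           # size of the (toy) LoRA pool

# Hypothetical pool of LoRA factors (B_i: d x r, A_i: r x k), standing in for
# user-contributed adapters recycled from a model hub.
pool = [(rng.normal(size=(d, r)), rng.normal(size=(r, k))) for _ in range(n_loras)]

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def merge_loras(pool, logits):
    """Combine LoRA updates with softmax-normalized merging coefficients.

    In adaptive merging, `logits` would be optimized on task-specific data;
    here they are illustrative constants.
    """
    coeffs = softmax(logits)
    delta = sum(c * (B @ A) for c, (B, A) in zip(coeffs, pool))
    return delta, coeffs

delta, coeffs = merge_loras(pool, np.array([2.0, 0.5, -1.0]))
# The merged update is applied to the base model's weight: W = W0 + delta.
```

The softmax normalization is one common parameterization (it keeps coefficients positive and summing to one); other merging methods use unconstrained or per-module coefficients instead.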