[2602.12323] The Appeal and Reality of Recycling LoRAs with Adaptive Merging

arXiv - Machine Learning

Summary

This article examines how well adaptive merging methods work for recycling LoRA modules found in the wild, finding that they improve on the base model but offer limited benefit over simply training a new LoRA.

Why It Matters

As the use of fine-tuned LoRA modules grows, understanding their recycling and merging can enhance model performance and efficiency. This research addresses a gap in existing literature by evaluating the practical implications of merging user-contributed LoRAs, which could influence future model development strategies.

Key Takeaways

  • Adaptive merging methods can improve performance over the base model, but they do not significantly outperform training a new LoRA on the same data used to tune the merging coefficients.
  • The choice of LoRAs to merge is less critical than previously thought.
  • Randomly initialized LoRAs can yield similar results to recycled ones, suggesting a regularization effect.
  • Positive transfer is possible with highly relevant LoRAs in the pool.
  • The study provides empirical evidence and releases model checkpoints and code for further research.

Computer Science > Machine Learning — arXiv:2602.12323 (cs)
[Submitted on 12 Feb 2026]

Title: The Appeal and Reality of Recycling LoRAs with Adaptive Merging
Authors: Haokun Liu, Gyung Hyun Je, Marco Ciccone, Zhenlin Xu, Prasanth YSS, Colin Raffel

Abstract: The widespread availability of fine-tuned LoRA modules for open pre-trained models has led to interest in methods that adaptively merge LoRAs to improve performance. These methods typically include some way of selecting LoRAs from a pool and of tuning merging coefficients on a task-specific dataset. While adaptive merging methods have demonstrated improvements in some settings, no past work has attempted to recycle LoRAs found "in the wild" on model repositories like the Hugging Face Hub. To address this gap, we consider recycling from a pool of nearly 1,000 user-contributed LoRAs trained from the Llama 3.1 8B-Instruct language model. Our empirical study includes a range of adaptive and non-adaptive merging methods, in addition to a new method designed via a wide search over the methodological design space. We demonstrate that adaptive merging methods can improve performance over the base model but provide limited benefit over training a new LoRA on the same data used to set merging coefficients. We additionally find not only that the specific choice of LoRAs to...
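The paper does not prescribe a single parameterization, but the weighted-combination scheme the abstract describes (selecting LoRAs from a pool and tuning merging coefficients) can be sketched as follows. This is a minimal illustration with toy numpy arrays; the function name, shapes, and uniform coefficients are assumptions, not the authors' method:

```python
import numpy as np

def merge_loras(base_weight, lora_pool, coefficients):
    """Merge a pool of LoRA modules into a base weight matrix.

    Each LoRA is a low-rank pair (A, B) whose weight delta is B @ A.
    The merged weight is W + sum_i alpha_i * (B_i @ A_i); adaptive
    merging methods tune the alpha_i on a task-specific dataset.
    """
    merged = base_weight.copy()
    for (A, B), alpha in zip(lora_pool, coefficients):
        merged += alpha * (B @ A)
    return merged

# Toy example: a 4x4 base weight and two rank-2 LoRAs.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
pool = [(rng.normal(size=(2, 4)), rng.normal(size=(4, 2)))
        for _ in range(2)]

# Uniform coefficients stand in for a non-adaptive baseline; an
# adaptive method would optimize these on held-out task data.
alphas = np.array([0.5, 0.5])
W_merged = merge_loras(W, pool, alphas)
print(W_merged.shape)  # (4, 4)
```

With all coefficients set to zero, the merged weight reduces to the base model, which is why merging can act as a regularized interpolation between the base model and the pooled deltas.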
