[2505.18502] Knowledge Fusion of Large Language Models Via Modular SkillPacks
Summary
The paper presents GraftLLM, a novel method for knowledge fusion in large language models using modular SkillPacks, enhancing cross-capability transfer and continual learning.
Why It Matters
As large language models (LLMs) grow larger and more heterogeneous, effective knowledge transfer methods are crucial for improving their adaptability and efficiency. This research addresses the limitations of distillation- and PEFT-based transfer and offers a scalable way to integrate diverse model capabilities, which is essential for advancing AI applications.
Key Takeaways
- GraftLLM introduces SkillPacks for efficient knowledge storage and transfer.
- The method supports forget-free continual learning and model fusion.
- Experiments show GraftLLM outperforms existing knowledge transfer techniques.
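The paper itself does not specify the SkillPack mechanics in this excerpt, but the core idea of storing a source model's capability as a compact, graftable module can be illustrated with a minimal sketch. Everything below is an assumption for illustration: weights are plain dicts of floats, and a "SkillPack" is modeled as the sparse parameter delta between a fine-tuned source model and its base, which can later be grafted onto a target.

```python
# Hedged sketch, NOT the paper's actual algorithm: a "SkillPack" here is
# the sparse delta between a fine-tuned model and its base, applied to a
# target model that shares (some of) the same parameter names.

def extract_skillpack(base, tuned, threshold=1e-3):
    """Keep only parameters whose fine-tuning change exceeds a threshold."""
    return {
        name: tuned[name] - base[name]
        for name in base
        if abs(tuned[name] - base[name]) > threshold
    }

def graft(target, skillpack, scale=1.0):
    """Apply a SkillPack delta onto a target model's shared parameters."""
    merged = dict(target)
    for name, delta in skillpack.items():
        if name in merged:  # skip parameters the target does not have
            merged[name] += scale * delta
    return merged

# Toy "models": three scalar parameters stand in for weight tensors.
base  = {"w1": 0.50, "w2": -1.20, "w3": 0.00}
tuned = {"w1": 0.50, "w2": -0.90, "w3": 0.40}  # fine-tuned on a new skill

pack = extract_skillpack(base, tuned)  # stores only w2 and w3 deltas
target = graft(base, pack)
```

Because only the above-threshold deltas are stored, the pack stays small and can be applied, removed, or composed with other packs without retraining the target, which is one plausible reading of the forget-free continual-learning claim.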
Computer Science > Artificial Intelligence
arXiv:2505.18502 (cs) [Submitted on 24 May 2025 (v1), last revised 25 Feb 2026 (this version, v2)]
Title: Knowledge Fusion of Large Language Models Via Modular SkillPacks
Authors: Guodong Du, Zhuo Li, Xuanning Zhou, Junlin Li, Zesheng Shi, Wanyu Lin, Ho-Kin Tang, Xiucheng Li, Fangming Liu, Wenya Wang, Min Zhang, Jing Li
Abstract: Cross-capability transfer is a key challenge in large language model (LLM) research, with applications in multi-task integration, model compression, and continual learning. Recent works like FuseLLM and FuseChat have demonstrated the potential of transferring multiple model capabilities to lightweight models, enhancing adaptability and efficiency, which motivates our investigation into more efficient cross-capability transfer methods. However, existing approaches primarily focus on small, homogeneous models, limiting their applicability. For large, heterogeneous models, knowledge distillation with full-parameter fine-tuning often overlooks the student model's intrinsic capacity and risks catastrophic forgetting, while PEFT methods struggle to effectively absorb knowledge from source LLMs. To address these issues, we introduce GraftLLM, a novel method that stores source model capabilities in a target model with SkillPack format. This approach preserves general capabilities…