[2503.12016] A Survey on Federated Fine-tuning of Large Language Models
Summary
This survey explores the integration of Federated Learning with Large Language Models (LLMs), addressing challenges and methodologies for effective fine-tuning while ensuring data privacy.
Why It Matters
As LLMs become increasingly prevalent, the need for privacy-preserving techniques like Federated Learning is critical. This survey provides a comprehensive overview of the current state of Federated Fine-tuning, highlighting its importance for researchers and practitioners in AI development.
Key Takeaways
- Federated Learning (FL) enhances LLMs by enabling collaborative model adaptation without compromising data privacy.
- The survey reviews historical developments and current challenges in the FedLLM landscape.
- Parameter-Efficient Fine-tuning (PEFT) methods are crucial for effective deployment of FedLLM.
- The paper outlines real-world applications of FedLLM across various domains.
- The survey identifies open challenges and future research directions in federated fine-tuning.
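The core mechanism behind FedLLM with PEFT can be made concrete with a small sketch: clients fine-tune and exchange only low-rank adapter matrices (as in LoRA) rather than full model weights, and the server aggregates them with FedAvg-style weighted averaging. The sketch below is a minimal illustration under these assumptions, not the specific algorithm of any method surveyed in the paper; the function and parameter names (`fedavg`, `lora_A`, `lora_B`) are hypothetical.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """FedAvg aggregation: weighted average of client parameters.

    client_updates: list of dicts mapping parameter name -> np.ndarray
    client_sizes:   local training-set size per client (weights)
    """
    total = sum(client_sizes)
    return {
        name: sum((n / total) * upd[name]
                  for upd, n in zip(client_updates, client_sizes))
        for name in client_updates[0]
    }

# Toy example: two clients communicate only rank-2 LoRA adapter
# matrices (hypothetical names), not the full d x d weight matrix,
# which is what makes federated fine-tuning communication-efficient.
rng = np.random.default_rng(0)
d, r = 8, 2
clients = [
    {"lora_A": rng.normal(size=(r, d)), "lora_B": rng.normal(size=(d, r))}
    for _ in range(2)
]
global_adapters = fedavg(clients, client_sizes=[100, 300])
print(global_adapters["lora_A"].shape)  # (2, 8)
```

Raw data never leaves a client; only the adapter tensors (here 2x8 and 8x2 instead of 8x8) are shared, which is the privacy and efficiency argument the takeaways above summarize.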
Computer Science > Machine Learning
arXiv:2503.12016 (cs)
[Submitted on 15 Mar 2025 (v1), last revised 24 Feb 2026 (this version, v3)]
Title: A Survey on Federated Fine-tuning of Large Language Models
Authors: Yebo Wu, Chunlin Tian, Jingguang Li, He Sun, Kahou Tam, Zhanting Zhou, Haicheng Liao, Jing Xiong, Zhijiang Guo, Li Li, Chengzhong Xu
Abstract: Large Language Models (LLMs) have demonstrated impressive success across various tasks. Integrating LLMs with Federated Learning (FL), a paradigm known as FedLLM, offers a promising avenue for collaborative model adaptation while preserving data privacy. This survey provides a systematic and comprehensive review of FedLLM. We begin by tracing the historical development of both LLMs and FL, summarizing relevant prior research to set the context. Subsequently, we delve into an in-depth analysis of the fundamental challenges inherent in deploying FedLLM. Addressing these challenges often requires efficient adaptation strategies; therefore, we conduct an extensive examination of existing Parameter-Efficient Fine-tuning (PEFT) methods and explore their applicability within the FL framework. To rigorously evaluate the performance of FedLLM, we undertake a thorough review of existing fine-tuning datasets and evaluation benchmarks. Furthermore, we discuss FedLLM's diverse real-world appli...