[2505.00333] Two Stage Wireless Federated LoRA Fine-Tuning with Sparsified Orthogonal Updates
Computer Science > Machine Learning
arXiv:2505.00333 (cs)
[Submitted on 1 May 2025 (v1), last revised 24 Mar 2026 (this version, v2)]

Title: Two Stage Wireless Federated LoRA Fine-Tuning with Sparsified Orthogonal Updates
Authors: Bumjun Kim, Wan Choi

Abstract: Transformer-based large language models (LLMs) have achieved remarkable success across a wide range of tasks. However, fine-tuning such massive models in federated learning (FL) settings poses significant challenges due to resource constraints and communication overhead. Low-Rank Adaptation (LoRA) addresses these issues by training compact, low-rank matrices instead of fully fine-tuning large models. This paper introduces a wireless federated LoRA fine-tuning framework that optimizes both learning performance and communication efficiency. We provide a novel convergence analysis revealing how LoRA rank and covariance effects influence FL training dynamics. Leveraging these insights, we propose Sparsified Orthogonal Fine-Tuning (SOFT), an adaptive sparsification method that streamlines parameter updates without expensive matrix multiplication and singular value decomposition (SVD) operations. Additionally, we present a Two Stage Federated Algorithm (TSFA) that pre-determines key parameters offline and dynamically adjusts bandwidth...
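To make the communication-saving idea concrete, the following is a minimal sketch of a generic LoRA-style parameterization (illustrative shapes and names, not the paper's SOFT or TSFA methods): the frozen weight W is augmented with a trainable low-rank product B @ A, so each client only trains and transmits r*(d + k) parameters instead of d*k.

```python
import numpy as np

# Hypothetical layer dimensions; r << min(d, k) is the LoRA rank.
d, k, r = 64, 32, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))         # frozen pretrained weight (not trained)
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # zero-initialized so B @ A = 0 at start

# Effective weight used in the forward pass: W + low-rank correction.
W_eff = W + B @ A

# Parameter counts: full fine-tuning vs. LoRA factors only.
full_params = d * k          # 2048
lora_params = r * (d + k)    # 384
print(full_params, lora_params)
```

Because B starts at zero, the model initially behaves exactly like the pretrained one, and only the compact A and B factors need to be communicated in each federated round.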