[2602.19926] Rethinking LoRA for Privacy-Preserving Federated Learning in Large Models
Summary
The paper presents LA-LoRA (Local Alternating LoRA), an approach for fine-tuning large models under privacy-preserving federated learning that addresses key challenges in the trade-off between performance and privacy.
Why It Matters
As federated learning gains traction for privacy-sensitive applications, improving the performance of large models while maintaining privacy guarantees is crucial. This research tackles the resulting privacy-utility trade-off, offering a method that improves model accuracy under differential-privacy constraints in real-world deployments.
Key Takeaways
- LA-LoRA decouples the gradient interactions between LoRA's two low-rank matrices to enhance model robustness.
- The approach improves performance in privacy-constrained environments.
- Extensive experiments show LA-LoRA outperforms existing methods like RoLoRA.
- The method is applicable to both large vision models and large language models.
- LA-LoRA strengthens convergence guarantees in noisy federated settings.
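The "local alternating" idea in the takeaways above can be sketched as updating only one of the two low-rank factors per step, so their gradients never interact within a single step. The following is a minimal toy illustration on a synthetic objective; the variable names, update schedule, and hyperparameters are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LoRA adapter: frozen weight W0 plus low-rank update B @ A.
d, k, r = 8, 8, 2
W0 = rng.normal(size=(d, k))               # frozen pre-trained weight
A = rng.normal(size=(r, k)) * 0.1          # trainable low-rank factor A
B = rng.normal(size=(d, r)) * 0.1          # trainable low-rank factor B

def adapted_weight(W0, B, A):
    return W0 + B @ A

def local_step(A, B, grad_W, lr, update_A):
    """One alternating step: only one factor moves per step, so the two
    factor gradients are decoupled within the step (illustrative reading
    of LA-LoRA's decoupling, not the paper's exact update rule)."""
    if update_A:
        A = A - lr * (B.T @ grad_W)        # dL/dA = B^T (dL/dW)
    else:
        B = B - lr * (grad_W @ A.T)        # dL/dB = (dL/dW) A^T
    return A, B

# Synthetic target: the pre-trained weight plus a rank-r perturbation,
# so the adapter can in principle represent it exactly.
W_target = W0 + rng.normal(size=(d, r)) @ rng.normal(size=(r, k))

err0 = np.linalg.norm(adapted_weight(W0, B, A) - W_target)
lr = 0.02
for step in range(500):
    # Gradient of 0.5 * ||W - W_target||_F^2 with respect to W.
    grad_W = adapted_weight(W0, B, A) - W_target
    A, B = local_step(A, B, grad_W, lr, update_A=(step % 2 == 0))

err = np.linalg.norm(adapted_weight(W0, B, A) - W_target)
assert err < err0  # alternating updates make progress on this toy objective
```

Alternating the factors is the standard way to avoid the coupled term that appears when both A and B move simultaneously; the paper's contribution lies in how this is done locally per client under differential privacy, which this sketch does not model.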
Computer Science > Machine Learning
arXiv:2602.19926 (cs) [Submitted on 23 Feb 2026]
Title: Rethinking LoRA for Privacy-Preserving Federated Learning in Large Models
Authors: Jin Liu, Yinbin Miao, Ning Xi, Junkang Liu
Abstract: Fine-tuning large vision models (LVMs) and large language models (LLMs) under differentially private federated learning (DPFL) is hindered by a fundamental privacy-utility trade-off. Low-Rank Adaptation (LoRA), a promising parameter-efficient fine-tuning (PEFT) method, reduces computational and communication costs by introducing two trainable low-rank matrices while freezing pre-trained weights. However, directly applying LoRA in DPFL settings leads to performance degradation, especially in LVMs. Our analysis reveals three previously underexplored challenges: (1) gradient coupling caused by the simultaneous update of two asymmetric low-rank matrices, (2) compounded noise amplification under differential privacy, and (3) sharpness of the global aggregated model in the parameter space. To address these issues, we propose LA-LoRA (Local Alternating LoRA), a novel approach that decouples gradient interactions and aligns update directions across clients to enhance robustness under stringent privacy constraints. Theoretically, LA-LoRA strengthens convergence guarantees...
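Challenge (2) in the abstract, compounded noise amplification, can be seen directly from the LoRA parameterization: when Gaussian noise is injected into both low-rank factors, the effective weight update picks up a multiplicative cross term on top of the two linear noise terms. A minimal sketch of that decomposition (the dimensions and noise scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# LoRA factors: effective weight update is dW = B @ A.
d, k, r = 16, 16, 4
A = rng.normal(size=(r, k))
B = rng.normal(size=(d, r))

sigma = 0.1  # illustrative DP noise scale (not a calibrated DP parameter)
nA = rng.normal(scale=sigma, size=A.shape)  # noise injected into A
nB = rng.normal(scale=sigma, size=B.shape)  # noise injected into B

dW_clean = B @ A
dW_noisy = (B + nB) @ (A + nA)

# The error decomposes into two linear terms plus a multiplicative
# cross term nB @ nA -- the source of the "compounded" amplification.
linear = B @ nA + nB @ A
cross = nB @ nA
err = dW_noisy - dW_clean
assert np.allclose(err, linear + cross)
```

Updating only one factor at a time (as LA-LoRA's alternating scheme suggests) keeps the perturbation linear in the injected noise, since the frozen factor contributes no noise term in that step.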