[2602.23638] FedRot-LoRA: Mitigating Rotational Misalignment in Federated LoRA
Computer Science > Machine Learning
arXiv:2602.23638 (cs) [Submitted on 27 Feb 2026]

Title: FedRot-LoRA: Mitigating Rotational Misalignment in Federated LoRA
Authors: Haoran Zhang, Dongjun Kim, Seohyeon Cha, Haris Vikalo

Abstract: Federated LoRA provides a communication-efficient mechanism for fine-tuning large language models on decentralized data. In practice, however, a discrepancy between the factor-wise averaging used to preserve low rank and the mathematically correct aggregation of local updates can cause significant aggregation error and unstable training. We argue that a major source of this problem is rotational misalignment, arising from the rotational invariance of low-rank factorizations -- semantically equivalent updates can be represented in different latent subspaces across clients since $(B_i R_i)(R_i^\top A_i) = B_i A_i$. When such misaligned factors are averaged directly, they interfere destructively and degrade the global update. To address this issue, we propose FedRot-LoRA, a federated LoRA framework that aligns client updates via orthogonal transformations prior to aggregation. This alignment preserves the semantic update while reducing cross-client subspace mismatch, without increasing communication cost or restricting model expressivity. We provide a convergence analysis that examines the ag...
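The abstract's core observation can be demonstrated numerically. The sketch below (an illustration of the general idea, not the paper's exact procedure or variable names) builds two LoRA factorizations of the same update that differ only by an orthogonal rotation $R$, shows that naive factor-wise averaging of $(B_i, A_i)$ distorts the product, and that aligning one client's factors via an orthogonal Procrustes solve before averaging recovers the shared update:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 8, 6, 2  # output dim, input dim, LoRA rank (illustrative sizes)

# A shared low-rank update Delta W = B A that both clients encode.
B = rng.standard_normal((d, r))
A = rng.standard_normal((r, k))
target = B @ A

# Client 1 holds (B, A); client 2 holds a rotated but semantically
# equivalent factorization, since (B R)(R^T A) = B A for orthogonal R.
Q, _ = np.linalg.qr(rng.standard_normal((r, r)))
B1, A1 = B, A
B2, A2 = B @ Q, Q.T @ A
assert np.allclose(B2 @ A2, target)  # same update, different factors

# Naive factor-wise averaging mixes the misaligned latent subspaces.
naive = ((B1 + B2) / 2) @ ((A1 + A2) / 2)

# Align client 2's factors to client 1's before averaging:
# orthogonal Procrustes, R* = argmin_{R orthogonal} ||B2 R - B1||_F.
U, _, Vt = np.linalg.svd(B2.T @ B1)
R_star = U @ Vt
B2a, A2a = B2 @ R_star, R_star.T @ A2  # product B2a @ A2a is unchanged
aligned = ((B1 + B2a) / 2) @ ((A1 + A2a) / 2)

err_naive = np.linalg.norm(naive - target)
err_aligned = np.linalg.norm(aligned - target)
print(f"naive averaging error:   {err_naive:.4f}")
print(f"aligned averaging error: {err_aligned:.2e}")
```

In this idealized case the Procrustes rotation exactly undoes $Q$, so the aligned average reproduces the target update to machine precision, while the naive average does not; with genuinely heterogeneous client updates the alignment reduces, rather than eliminates, the aggregation error.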