[2603.01526] Scalable Multi-Task Low-Rank Model Adaptation
Computer Science > Machine Learning
arXiv:2603.01526 (cs) [Submitted on 2 Mar 2026]

Title: Scalable Multi-Task Low-Rank Model Adaptation
Authors: Zichen Tian, Antoine Ledent, Qianru Sun

Abstract: Scaling multi-task low-rank adaptation (LoRA) to a large number of tasks induces catastrophic performance degradation: on DOTA, for example, accuracy drops from 88.2% to 2.0% when scaling from 5 to 15 tasks. This failure stems from parameter and representation misalignment. We find that existing solutions, such as regularization and dynamic routing, fail at scale because they are constrained by a fundamental trade-off: strengthening regularization to reduce inter-task conflict inadvertently suppresses the feature discrimination required for effective routing. In this work, we identify two root causes of this trade-off. First, uniform regularization disrupts inter-task knowledge sharing: shared underlying knowledge concentrates in high-singular-value (high-SV) components (89% alignment on Flanv2->BBH), and uniform regularization forces these components to update in orthogonal directions, directly disrupting the shared knowledge. Second, conflict amplification: applying LoRA at the component level (e.g., W_q, W_v) amplifies gradient conflicts; we show that block-level adaptation reduces this conflict by 76% with only 50% of the parameters. Based on these insights, we propose mtL...
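The component-level versus block-level distinction in the abstract can be made concrete with a simple parameter count. The sketch below compares per-matrix LoRA adapters against a single adapter over a fused QKV projection; the fused layout, the hidden size, and the rank are illustrative assumptions, not the paper's exact configuration, so the resulting ratio need not match the abstract's 50% figure.

```python
# Hedged sketch: parameter accounting for component-level vs. block-level LoRA.
# The fused-QKV layout is an assumption for illustration; the paper's exact
# "block-level" scheme and its 50% figure depend on the authors' configuration.

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """A rank-r LoRA adapter adds A (d_in x r) and B (r x d_out)."""
    return d_in * r + r * d_out

d, r = 4096, 16  # example hidden size and rank (assumed values)

# Component-level: separate adapters on W_q, W_k, W_v, W_o (each d x d).
component = 4 * lora_params(d, d, r)

# Block-level (assumed layout): one adapter over the fused QKV projection
# (d -> 3d), sharing a single low-rank input basis A across Q/K/V,
# plus one adapter on W_o.
block = lora_params(d, 3 * d, r) + lora_params(d, d, r)

print(f"component-level: {component} params")
print(f"block-level:     {block} params ({block / component:.0%} of component)")
```

Sharing one down-projection A across the fused block is also what couples the Q/K/V updates, which is one plausible mechanism for the reduced gradient conflict the abstract reports.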