[2603.26317] Label-Free Cross-Task LoRA Merging with Null-Space Compression
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.26317 (cs)
[Submitted on 27 Mar 2026]
Title: Label-Free Cross-Task LoRA Merging with Null-Space Compression
Authors: Wonyoung Lee, Wooseong Jeong, Kuk-Jin Yoon
Abstract: Model merging combines independently fine-tuned checkpoints without joint multi-task training. In the era of foundation models, fine-tuning with Low-Rank Adaptation (LoRA) is prevalent, making LoRA merging a promising target. Existing approaches work in homogeneous settings where all target tasks are classification, but they often fail when tasks span classification and regression. Approaches that rely on entropy-based surrogates do not apply to regression and are costly for large language models due to long token sequences. We introduce Null-Space Compression (NSC) Merging, a label-free, output-agnostic method that sets merge weights from adapter geometry. Our key observation is that during LoRA fine-tuning the down-projection factor $A$ in $\Delta W = BA$ compresses its null space, and this compression correlates with performance. NSC uses this as an optimization signal for merging that generalizes across classification, regression, and sequence generation. NSC achieves state-of-the-art performance across twenty heterogeneous vision tasks, with balanced gains where prior methods overfit subs...
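The idea of weighting adapters by the geometry of their down-projection factors can be sketched as follows. The abstract does not specify how null-space compression is measured, so the `compression_score` below is a hypothetical stand-in (spectral-energy concentration of $A$'s singular values); the merge itself is a plain weighted sum of the low-rank updates $\Delta W = BA$.

```python
import numpy as np

def compression_score(A):
    """Hypothetical geometry signal for one adapter.

    Measures how concentrated the spectral energy of the
    down-projection factor A (shape r x d) is in its top
    direction -- a proxy for how strongly A has compressed
    its null space during fine-tuning.
    """
    s = np.linalg.svd(A, compute_uv=False)
    energy = s ** 2
    return energy.max() / energy.sum()

def nsc_merge(adapters):
    """Label-free merge of LoRA adapters.

    adapters: list of (B, A) factor pairs from independently
    fine-tuned checkpoints, all for the same base weight.
    Returns a single merged update Delta W.
    """
    scores = np.array([compression_score(A) for _, A in adapters])
    weights = scores / scores.sum()  # normalize to a convex combination
    return sum(w * (B @ A) for w, (B, A) in zip(weights, adapters))

# Usage: merge two rank-2 adapters for an 8 x 16 weight matrix.
rng = np.random.default_rng(0)
adapters = [
    (rng.normal(size=(8, 2)), rng.normal(size=(2, 16))),
    (rng.normal(size=(8, 2)), rng.normal(size=(2, 16))),
]
merged = nsc_merge(adapters)
print(merged.shape)  # (8, 16)
```

Because the weights come only from the adapter factors themselves, no labels, logits, or task outputs are needed, which is what makes such a scheme output-agnostic across classification, regression, and generation.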