[2603.01632] DeLo: Dual Decomposed Low-Rank Experts Collaboration for Continual Missing Modality Learning
Computer Science > Machine Learning
arXiv:2603.01632 (cs)
[Submitted on 2 Mar 2026]

Title: DeLo: Dual Decomposed Low-Rank Experts Collaboration for Continual Missing Modality Learning
Authors: Xiwei Liu, Yulong Li, Feilong Tang, Imran Razzak

Abstract: Adapting Large Multimodal Models (LMMs) to real-world scenarios poses the dual challenge of learning from sequential data streams while handling frequent modality incompleteness, a task known as Continual Missing Modality Learning (CMML). Existing CMML works rely predominantly on prompt tuning, a technique that struggles with this task because learnable prompts from different tasks interfere in their shared embedding space. A naive application of Low-Rank Adaptation (LoRA) with a modality-shared module also suffers from modality interference caused by competing gradients. To this end, we propose DeLo, the first framework to leverage a novel dual-decomposed low-rank expert architecture for CMML. Specifically, this architecture resolves modality interference through decomposed LoRA experts, dynamically composing the LoRA update matrix from rank-one factors drawn from disentangled modality-specific factor pools. Embedded within a task-partitioned framework that structurally prevents catastrophic forgetting, this expert system is supported by t...
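The abstract's core mechanism, composing a LoRA update matrix as a weighted sum of rank-one factors drawn from a modality-specific pool, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pool sizes, modality names, and the uniform routing weights are all hypothetical assumptions, and in the paper the weights would presumably come from a learned router.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, pool_size = 8, 8, 4

# Hypothetical per-modality pools of rank-one factors (u_i, v_i).
# Keeping the pools disjoint per modality is what "disentangled
# modality-specific factor pools" suggests.
pools = {
    m: [(rng.normal(size=(d_out, 1)), rng.normal(size=(1, d_in)))
        for _ in range(pool_size)]
    for m in ("vision", "text")
}

def compose_lora_update(modality, weights):
    """Compose a LoRA update Delta-W as a weighted sum of rank-one
    factors from a single modality-specific pool."""
    delta_w = np.zeros((d_out, d_in))
    for w, (u, v) in zip(weights, pools[modality]):
        delta_w += w * (u @ v)  # each u @ v is a rank-one matrix
    return delta_w

# Uniform weights for illustration; a learned router would set these.
delta = compose_lora_update("vision", np.ones(pool_size) / pool_size)
```

Because `delta` is a sum of `pool_size` rank-one matrices, its rank is at most `pool_size`, which is how the composition stays low-rank while the factors themselves remain separable per modality.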