[2603.21584] SSAM: Singular Subspace Alignment for Merging Multimodal Large Language Models
Computer Science > Machine Learning
arXiv:2603.21584 (cs)
[Submitted on 23 Mar 2026]

Title: SSAM: Singular Subspace Alignment for Merging Multimodal Large Language Models
Authors: Md Kaykobad Reza, Ameya Patil, Edward Ayrapetian, M. Salman Asif

Abstract: Multimodal large language models (MLLMs) achieve strong performance by jointly processing inputs from multiple modalities, such as vision, audio, and language. However, building such models or extending them to new modalities often requires large paired datasets and substantial computational resources. Since many pretrained MLLMs (e.g., vision-language or audio-language) are publicly available, we ask whether they can be merged into a single MLLM that handles multiple modalities. Merging MLLMs with different input modalities remains challenging, partly because of differences in their learned representations and interference between their parameter spaces. To address these challenges, we propose Singular Subspace Alignment and Merging (SSAM), a training-free model merging framework that unifies independently trained specialist MLLMs into a single model capable of handling any combination of input modalities. SSAM maintains modality-specific parameter updates separately and identifies a shared low-rank subspace for language-related parameter updates, ...
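The abstract does not spell out the SSAM algorithm itself, but the core idea it names (keeping modality-specific updates separate while aligning language-related updates in a shared low-rank singular subspace) can be illustrated with a minimal, hypothetical sketch. Here the rank `r`, the stacking of the two updates, and the simple averaging step are all assumptions for illustration, not the paper's actual procedure:

```python
import numpy as np

# Hypothetical sketch of low-rank singular-subspace merging; the exact
# SSAM algorithm is not given in the abstract. We treat each specialist
# model's language weights as the shared base plus a small update (delta),
# find a shared low-rank basis via SVD, and merge inside that subspace.

rng = np.random.default_rng(0)
d = 64
W_base = rng.standard_normal((d, d))              # shared pretrained language weights
delta_vision = 0.01 * rng.standard_normal((d, d)) # vision-language specialist update
delta_audio = 0.01 * rng.standard_normal((d, d))  # audio-language specialist update

r = 8  # assumed rank of the shared subspace

# Identify a shared left singular subspace from the concatenated updates.
U, _, _ = np.linalg.svd(np.hstack([delta_vision, delta_audio]),
                        full_matrices=False)
U_r = U[:, :r]            # top-r shared basis vectors
proj = U_r @ U_r.T        # projector onto the shared subspace

# Project each specialist update into the shared subspace, then average.
delta_merged = 0.5 * (proj @ delta_vision + proj @ delta_audio)

W_merged = W_base + delta_merged
print(W_merged.shape)  # (64, 64); delta_merged has rank at most r
```

Because both updates are projected onto the same rank-`r` basis before averaging, the merged update lives in a single shared subspace, which is the kind of interference-reducing alignment the abstract describes; modality-specific parameters would be kept outside this merge entirely.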