[2602.07026] Modality Gap-Driven Subspace Alignment Training Paradigm For Multimodal Large Language Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.07026 (cs)

[Submitted on 2 Feb 2026 (v1), last revised 8 May 2026 (this version, v2)]

Title: Modality Gap-Driven Subspace Alignment Training Paradigm For Multimodal Large Language Models

Authors: Xiaomin Yu, Yi Xin, Yuhui Zhang, Wenjie Zhang, Chonghan Liu, Hanzhen Zhao, Chen Liu, Xiaoxing Hu, Ziyue Qiao, Hao Tang, Xiaobin Hu, Chengwei Qin, Hui Xiong, Yu Qiao, Shuicheng Yan

Abstract: Despite the success of multimodal contrastive learning in aligning visual and linguistic representations, a persistent geometric anomaly, the Modality Gap, remains: embeddings of distinct modalities expressing identical semantics occupy systematically offset regions. Prior approaches to bridging this gap are largely limited by oversimplified isotropic assumptions, hindering their application in large-scale scenarios. In this paper, we address these limitations by precisely characterizing the geometric shape of the modality gap and leveraging it for efficient model scaling. First, we propose the Fixed-frame Modality Gap Theory, which decomposes the modality gap within a frozen reference frame into stable biases and anisotropic residuals. Guided by this precise modeling, we introduce ReAlign, a training-free modality alignment strategy. Utilizing...
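To make the abstract's core notion concrete, below is a minimal numerical sketch of the mean-offset view of the modality gap: given paired image and text embeddings, the per-pair gap vectors are decomposed into a shared stable bias (their mean) plus anisotropic residuals, and the text embeddings are shifted by the estimated bias. The function names and the simple mean-subtraction correction are illustrative assumptions only; they are not the paper's ReAlign procedure, which models the gap within a fixed reference frame rather than via this naive shift.

```python
import numpy as np


def decompose_modality_gap(img_emb: np.ndarray, txt_emb: np.ndarray):
    """Split per-pair gap vectors into a stable bias and anisotropic residuals.

    img_emb, txt_emb: (N, D) arrays of paired, L2-normalized embeddings
    (e.g. from a CLIP-style encoder). Hypothetical helper, not from the paper.
    """
    gap = img_emb - txt_emb       # per-pair gap vectors, shape (N, D)
    bias = gap.mean(axis=0)       # stable bias: shared offset direction
    residual = gap - bias         # anisotropic residuals around that bias
    return bias, residual


def shift_by_bias(txt_emb: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Training-free correction sketch: shift text embeddings toward the
    image side by the estimated bias, then renormalize to the unit sphere."""
    shifted = txt_emb + bias
    return shifted / np.linalg.norm(shifted, axis=1, keepdims=True)


# Toy usage with random stand-ins for real paired embeddings.
rng = np.random.default_rng(0)
img = rng.normal(size=(512, 64))
txt = rng.normal(size=(512, 64))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

bias, residual = decompose_modality_gap(img, txt)
txt_aligned = shift_by_bias(txt, bias)
```

Subtracting only the mean offset is exactly the isotropic simplification the abstract criticizes; the paper's contribution is to model the anisotropic residual term as well, so this sketch should be read as the baseline picture ReAlign improves upon.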