[2603.19862] IsoCLIP: Decomposing CLIP Projectors for Efficient Intra-modal Alignment
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.19862 (cs) [Submitted on 20 Mar 2026]

Title: IsoCLIP: Decomposing CLIP Projectors for Efficient Intra-modal Alignment
Authors: Simone Magistri, Dipam Goswami, Marco Mistretta, Bartłomiej Twardowski, Joost van de Weijer, Andrew D. Bagdanov

Abstract: Vision-Language Models like CLIP are widely used for inter-modal tasks that involve both the visual and text modalities. However, when the individual modality encoders are applied to inherently intra-modal tasks such as image-to-image retrieval, their performance suffers from intra-modal misalignment. In this paper we study intra-modal misalignment in CLIP, focusing on the role of the projectors that map pre-projection image and text embeddings into the shared embedding space. By analyzing the form of the cosine similarity applied to projected features, and its interaction with the contrastive CLIP loss, we show that there is an inter-modal operator responsible for aligning the two modalities during training, and a second, intra-modal operator that only enforces intra-modal normalization and does nothing to promote intra-modal alignment. Via spectral analysis of the inter-modal operator, we identify an approximately isotropic subspace in which the two modalities are well aligned, as well...
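The decomposition sketched in the abstract can be illustrated numerically. The sketch below is a minimal illustration, not the paper's method: the projection matrices `W_v` and `W_t` are random stand-ins for CLIP's image and text projectors, and the dimensions are hypothetical. It shows how the cosine similarity between projected features factors into a bilinear form governed by an inter-modal operator `M = W_v^T W_t` (whose spectrum one would analyze via SVD) and a denominator that depends only on intra-modal normalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for CLIP projectors mapping pre-projection
# embeddings (dim d_pre) into the shared space (dim d_shared).
d_pre, d_shared = 768, 512
W_v = rng.standard_normal((d_shared, d_pre)) / np.sqrt(d_pre)  # image projector
W_t = rng.standard_normal((d_shared, d_pre)) / np.sqrt(d_pre)  # text projector

x = rng.standard_normal(d_pre)  # pre-projection image embedding
y = rng.standard_normal(d_pre)  # pre-projection text embedding

# Cosine similarity of projected features:
#   cos(W_v x, W_t y) = x^T (W_v^T W_t) y / (||W_v x|| * ||W_t y||)
# The numerator is a bilinear form under the inter-modal operator M;
# the denominator involves only the intra-modal operators W_v^T W_v and
# W_t^T W_t, i.e. pure normalization.
M = W_v.T @ W_t
den = np.linalg.norm(W_v @ x) * np.linalg.norm(W_t @ y)
cos_via_operator = (x @ M @ y) / den
cos_direct = (W_v @ x) @ (W_t @ y) / den
assert np.isclose(cos_via_operator, cos_direct)

# Spectral analysis of the inter-modal operator: the singular value
# spectrum of M shows how strongly the two modalities are coupled along
# each direction; a run of near-equal singular values would indicate an
# approximately isotropic subspace of the kind the abstract describes.
U, s, Vt = np.linalg.svd(M)
print("leading singular values:", s[:5])
```

With random projectors the spectrum is unstructured; the paper's point is that trained CLIP projectors induce structure in this spectrum that can be exploited for intra-modal alignment.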