[2602.20046] Closing the gap in multimodal medical representation alignment
Summary
This paper addresses the modality gap in multimodal medical representation alignment, proposing a framework to enhance alignment between radiology images and clinical text for improved cross-modal retrieval and image captioning.
Why It Matters
The research highlights a critical issue in multimodal learning, particularly in the medical domain, where effective alignment between different data modalities is essential for accurate interpretation and application in clinical settings. By proposing a solution to the modality gap, this work could significantly enhance the utility of AI in healthcare, improving diagnostic processes and patient outcomes.
Key Takeaways
- CLIP-based contrastive losses can lead to modality gaps in multimodal learning.
- The modality gap negatively impacts semantic alignment in medical contexts.
- A new modality-agnostic framework is proposed to improve alignment between radiology images and clinical text.
- Enhanced alignment can lead to better cross-modal retrieval and image captioning.
- This research addresses a previously unresolved issue in complex multimodal settings.
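The CLIP-based contrastive loss the takeaways refer to is the symmetric InfoNCE objective: matched image-text pairs sit on the diagonal of a similarity matrix and are pulled together, while off-diagonal pairs are pushed apart. A minimal NumPy sketch of that objective (illustrative only, not the paper's implementation; the function name and batch setup are hypothetical):

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, txt_emb: (N, d) arrays; row i of each is a matched pair.
    """
    # L2-normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix
    idx = np.arange(len(logits))        # matched pairs on the diagonal

    def xent(l):
        # Cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the image->text and text->image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Perfectly aligned pairs drive the loss toward zero, while mismatched pairs inflate it; the summary's point is that minimizing this objective alone still leaves the two modalities in separate regions of the latent space.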
Details
Subject: Computer Science > Computer Vision and Pattern Recognition, arXiv:2602.20046 (cs)
Submitted: 23 Feb 2026
Authors: Eleonora Grassucci, Giordano Cicchetti, Danilo Comminiello
Abstract: In multimodal learning, CLIP has emerged as the de-facto approach for mapping different modalities into a shared latent space by bringing semantically similar representations closer while pushing apart dissimilar ones. However, CLIP-based contrastive losses exhibit unintended behaviors that negatively impact true semantic alignment, leading to sparse and fragmented latent spaces. This phenomenon, known as the modality gap, has been partially mitigated for standard text and image pairs but remains unexplored and unresolved in more complex multimodal settings, such as the medical domain. In this work, we study this phenomenon in the latter case, revealing that the modality gap is also present in medical alignment, and we propose a modality-agnostic framework that closes this gap, ensuring that semantically related representations are more aligned, regardless of their source modality. Our method enhances alignment between radiology images and clinical text, improving cross-modal retrieval and image captioning.
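A common way to quantify the modality gap described in the abstract is the Euclidean distance between the centroids of the two modalities' L2-normalized embeddings: if images and text occupy separate cones on the unit hypersphere, this distance is large. A minimal sketch of that measurement, assuming paired (N, d) embedding arrays (this is a generic diagnostic, not the paper's proposed framework):

```python
import numpy as np

def modality_gap(img_emb, txt_emb):
    """Distance between per-modality centroids on the unit hypersphere.

    A value near 0 means the two modalities share the same region of
    the latent space; larger values indicate a wider modality gap.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return float(np.linalg.norm(img.mean(axis=0) - txt.mean(axis=0)))
```

Identical embedding sets give a gap of exactly zero, while embeddings concentrated in different directions produce a large gap; closing this distance while preserving pairwise semantic structure is the kind of alignment improvement the paper targets.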