[2603.02026] Learning to Read Where to Look: Disease-Aware Vision-Language Pretraining for 3D CT
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.02026 (cs)
[Submitted on 2 Mar 2026]

Title: Learning to Read Where to Look: Disease-Aware Vision-Language Pretraining for 3D CT

Authors: Simon Ging (1 and 2), Philipp Arnold (3), Sebastian Walter (4), Hani Alnahas (1), Hannah Bast (4), Elmar Kotter (3), Jiancheng Yang (5 and 6), Behzad Bozorgtabar (2), Thomas Brox (1)

Affiliations: (1) Computer Vision Group, University of Freiburg, Germany; (2) Adaptive & Agentic AI (A3) Lab, Aarhus University, Denmark; (3) Department of Radiology, Medical Center -- University of Freiburg, Germany; (4) Chair of Algorithms and Data Structures, University of Freiburg, Germany; (5) ELLIS Institute Finland; (6) School of Electrical Engineering, Aalto University, Finland

Abstract: Recent 3D CT vision-language models align volumes with reports via contrastive pretraining, but typically rely on limited public data and provide only coarse global supervision. We train a 3D CT vision-language model on 98k report-volume pairs (50k patients) collected at a single hospital, combined with public datasets, using SigLIP-style contrastive pretraining together with prompt-based disease supervision in the shared vision-text embedding space. On CT-RATE, our model achieves state-of-the-art text-to-image retrieval (R@10 ...
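
The abstract names two training signals: a SigLIP-style pairwise sigmoid contrastive loss between volume and report embeddings, and prompt-based disease supervision scored in the same shared embedding space. The sketch below, in PyTorch, illustrates what those two objectives typically look like; it is a minimal illustration under assumed conventions (L2-normalized embeddings, one hypothetical text prompt per disease, per-disease binary labels mined from reports), not the authors' actual implementation.

    import torch
    import torch.nn.functional as F

    def siglip_loss(img_emb, txt_emb, temperature, bias):
        # Pairwise sigmoid contrastive loss (SigLIP-style).
        # img_emb, txt_emb: (N, D) L2-normalized volume / report embeddings.
        logits = img_emb @ txt_emb.t() * temperature + bias
        # Targets are +1 on the diagonal (matched report-volume pairs)
        # and -1 everywhere else; each pair is a binary decision.
        n = img_emb.size(0)
        labels = 2 * torch.eye(n, device=logits.device) - 1
        return -F.logsigmoid(labels * logits).mean()

    def disease_prompt_loss(img_emb, prompt_emb, disease_labels):
        # Prompt-based disease supervision in the shared embedding space:
        # score each volume against one text prompt per disease (e.g. a
        # hypothetical template like "CT showing pleural effusion") and
        # apply a binary loss against per-disease labels from the report.
        # prompt_emb: (K, D) prompt embeddings; disease_labels: (N, K) in {0, 1}.
        logits = img_emb @ prompt_emb.t()
        return F.binary_cross_entropy_with_logits(logits, disease_labels.float())

How the two losses are weighted and combined, and how disease labels are extracted from the free-text reports, is not specified in the truncated abstract; the sketch leaves those choices open.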