[2604.01987] Curia-2: Scaling Self-Supervised Learning for Radiology Foundation Models
Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.01987 (cs)
[Submitted on 2 Apr 2026]

Title: Curia-2: Scaling Self-Supervised Learning for Radiology Foundation Models
Authors: Antoine Saporta, Baptiste Callard, Corentin Dancette, Julien Khlaut, Charles Corbière, Leo Butsanets, Amaury Prat, Pierre Manceron

Abstract: The rapid growth of medical imaging has fueled the development of Foundation Models (FMs) to reduce the growing, unsustainable workload on radiologists. While recent FMs have shown the power of large-scale pre-training for CT and MRI analysis, there remains significant room to optimize how these models learn from complex radiological volumes. Building upon the Curia framework, this work introduces Curia-2, which significantly improves the original pre-training strategy and representation quality to better capture the specificities of radiological data. The proposed methodology enables scaling the architecture up to billion-parameter Vision Transformers, marking a first for multi-modal CT and MRI FMs. Furthermore, we formalize the evaluation of these models by extending and restructuring CuriaBench into two distinct tracks: a 2D track tailored for slice-based ...
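The abstract highlights scaling to billion-parameter Vision Transformers. As a rough sanity check of what "billion-parameter" implies architecturally, the sketch below estimates ViT parameter counts from depth and hidden width using the standard approximation of roughly 12·depth·width² parameters in the transformer blocks (4·d² for attention projections plus 8·d² for a 4x MLP). The specific depth/width configurations shown are illustrative assumptions, not the configurations used in Curia-2.

```python
def vit_param_count(depth: int, width: int, mlp_ratio: int = 4,
                    patch: int = 16, in_ch: int = 3) -> int:
    """Approximate parameter count of a plain Vision Transformer.

    Per block: attention (QKV + output proj) ~ 4*d^2, MLP ~ 2*mlp_ratio*d^2.
    Biases, norms, and the classification head are ignored (they are
    negligible at this scale).
    """
    per_block = (4 + 2 * mlp_ratio) * width * width
    patch_embed = in_ch * patch * patch * width  # linear patch projection
    return depth * per_block + patch_embed


# Illustrative configurations (assumed, not from the paper):
# a ViT-L-like model (depth 24, width 1024) lands near 300M parameters,
# while a ViT-g-like model (depth 40, width 1536) crosses the 1B mark.
vit_large = vit_param_count(depth=24, width=1024)
vit_giant = vit_param_count(depth=40, width=1536)
print(f"ViT-L-like: {vit_large/1e6:.0f}M params")
print(f"ViT-g-like: {vit_giant/1e9:.2f}B params")
```

This back-of-the-envelope formula explains why "billion-parameter" ViTs require roughly depth 40 and hidden width around 1500, a substantial jump over the ViT-L class of models typically used in earlier medical imaging FMs.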