[2603.23308] Curriculum-Driven 3D CT Report Generation via Language-Free Visual Grafting and Zone-Constrained Compression
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.23308 (cs)
[Submitted on 24 Mar 2026]

Title: Curriculum-Driven 3D CT Report Generation via Language-Free Visual Grafting and Zone-Constrained Compression
Authors: V. K. Cody Bumgardner, Mitchell A. Klusty, Mahmut S. Gokmen, Evan W. Damron

Abstract: Automated radiology report generation from 3D computed tomography (CT) volumes is challenging due to extreme sequence lengths, severe class imbalance, and the tendency of large language models (LLMs) to ignore visual tokens in favor of linguistic priors. We present Ker-VLJEPA-3B, a four-phase curriculum learning framework for free-text report generation from thoracic CT volumes. A phased training curriculum progressively adapts a Llama 3.2 3B decoder to ground its output in visual features from a frozen, self-supervised encoder. Our visual backbone (LeJEPA ViT-Large) is trained via self-supervised joint-embedding prediction on unlabeled CTs, without text supervision. Unlike contrastive models (CLIP, BiomedCLIP), this language-free backbone yields modality-pure representations. Vision-language alignment is deferred to the curriculum's bridge and generation phases. This modality-agnostic design can integrate any self-supervised encoder into an LLM withou...
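The grafting idea the abstract describes, feeding a frozen, language-free visual encoder into an LLM decoder, can be sketched as a learned projection from the encoder's feature space into the decoder's embedding space, with projected visual tokens prepended as a prefix. The dimensions (1024-d ViT-Large features, 3072-d Llama 3.2 3B embeddings) and the single linear bridge are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions): ViT-Large features are 1024-d;
# Llama 3.2 3B token embeddings are 3072-d.
ENC_DIM, LLM_DIM = 1024, 3072

# Output of the frozen encoder: one feature vector per 3D patch token.
num_patch_tokens = 8
visual_feats = rng.standard_normal((num_patch_tokens, ENC_DIM))

# Trainable linear bridge: the only new parameters in a minimal graft.
W = rng.standard_normal((ENC_DIM, LLM_DIM)) * 0.02
b = np.zeros(LLM_DIM)

def graft(feats: np.ndarray) -> np.ndarray:
    """Project frozen visual features into the decoder embedding space."""
    return feats @ W + b

# Prepend the projected visual tokens to the text token embeddings so the
# decoder attends to them like ordinary prefix tokens.
text_embeds = rng.standard_normal((5, LLM_DIM))  # e.g. prompt token embeddings
prefix = graft(visual_feats)
decoder_input = np.concatenate([prefix, text_embeds], axis=0)

print(decoder_input.shape)  # (13, 3072): 8 visual + 5 text tokens
```

Deferring alignment to later curriculum phases, as the abstract states, would mean the encoder stays frozen throughout while only the bridge (and later the decoder) is updated.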