[2603.25155] Photon: Speedup Volume Understanding with Efficient Multimodal Large Language Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.25155 (cs)

[Submitted on 26 Mar 2026]

Title: Photon: Speedup Volume Understanding with Efficient Multimodal Large Language Models

Authors: Chengyu Fang, Heng Guo, Zheng Jiang, Chunming He, Xiu Li, Minfeng Xu

Abstract: Multimodal large language models are promising for clinical visual question answering, but scaling them to 3D imaging is hindered by high computational cost. Prior methods often rely on 2D slices or fixed-length token compression, disrupting volumetric continuity and obscuring subtle findings. We present Photon, a framework that represents 3D medical volumes with variable-length token sequences. Photon introduces instruction-conditioned token scheduling and surrogate gradient propagation to adaptively reduce tokens during both training and inference, lowering computational cost while mitigating the attention dilution caused by redundant tokens. It incorporates a custom backpropagation rule with gradient restoration to enable differentiable optimization despite the discrete token-drop operation. To stabilize token compression and ensure reliable use of visual evidence, Photon further applies regularization objectives that mitigate language-only bias. Experiments on diverse medical visual question ...
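The combination of a discrete token drop with "gradient restoration" suggests a straight-through-style estimator: the forward pass applies a hard top-k selection over instruction-conditioned token scores, while the backward pass routes gradients through the soft keep probabilities so the scorer remains trainable. The abstract page gives no code, so the sketch below is a minimal illustration under that assumption; the module name `InstructionConditionedTokenDrop`, the additive conditioning on the instruction embedding, and the `keep_ratio` parameter are all hypothetical, not part of the paper.

```python
import torch
import torch.nn as nn


class InstructionConditionedTokenDrop(nn.Module):
    """Hypothetical sketch: score visual tokens against the instruction
    embedding, keep the top-k, and restore gradients through the hard
    selection with a straight-through surrogate."""

    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, vis_tokens: torch.Tensor, instr_embed: torch.Tensor,
                keep_ratio: float = 0.5):
        # vis_tokens: (B, N, D) visual tokens; instr_embed: (B, D)
        # Simple additive conditioning on the instruction (an assumption).
        scores = self.scorer(vis_tokens + instr_embed.unsqueeze(1)).squeeze(-1)  # (B, N)
        probs = scores.sigmoid()

        # Discrete choice: keep the top-k scoring tokens.
        k = max(1, int(keep_ratio * vis_tokens.size(1)))
        topk = probs.topk(k, dim=1).indices
        hard = torch.zeros_like(probs).scatter(1, topk, 1.0)  # 0/1 keep mask

        # Straight-through surrogate: the forward value equals the hard mask,
        # but the backward pass sees the soft probabilities, restoring
        # gradients to the scorer despite the discrete drop.
        mask = hard + probs - probs.detach()
        kept = vis_tokens * mask.unsqueeze(-1)
        return kept, topk
```

In a deployed version the dropped tokens would be physically removed (e.g. a gather on `topk`) rather than zeroed, which is what realizes the compute savings at inference; the masked form above merely keeps the illustration differentiable end to end.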