[2603.27960] Efficient Inference of Large Vision Language Models
arXiv:2603.27960 (cs) [Submitted on 30 Mar 2026]

Title: Efficient Inference of Large Vision Language Models
Authors: Surendra Pathak

Abstract: Although Large Vision Language Models (LVLMs) have demonstrated impressive multimodal reasoning capabilities, their scalability and deployment are constrained by massive computational requirements. In particular, the large number of visual tokens produced from high-resolution inputs aggravates the problem, owing to the quadratic complexity of attention mechanisms. To address these issues, the research community has developed a range of optimization frameworks. This paper presents a comprehensive survey of state-of-the-art techniques for accelerating LVLM inference. We introduce a systematic taxonomy that categorizes existing optimization frameworks along four primary dimensions: visual token compression, memory management and serving, efficient architectural design, and advanced decoding strategies. Furthermore, we critically examine the limitations of current methodologies and identify open problems to inspire future research in efficient multimodal systems.

Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.27960 [cs.LG] (or arXiv:2603.27960v...
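The abstract's core motivation, that attention cost grows quadratically in the number of tokens, so compressing visual tokens pays off disproportionately, can be illustrated with a back-of-the-envelope sketch. The token counts below (a 24x24 patch grid yielding 576 visual tokens, 64 text tokens, a 4x compression ratio) are hypothetical illustration values, not figures from the paper:

```python
def attention_flops(n_tokens: int, d_model: int) -> int:
    """Rough multiply-add count for one self-attention layer.

    Both the QK^T score matrix and the score-times-V product cost
    about n^2 * d operations, so total cost scales quadratically
    with sequence length n.
    """
    return 2 * n_tokens**2 * d_model

# Hypothetical LVLM prompt: 576 visual tokens (24x24 patches) + 64 text tokens.
n_vis, n_txt, d = 576, 64, 4096

full = attention_flops(n_vis + n_txt, d)          # all tokens kept
compressed = attention_flops(n_vis // 4 + n_txt, d)  # keep 25% of visual tokens

print(f"per-layer attention speedup: ~{full / compressed:.1f}x")
```

Dropping 75% of the visual tokens here yields roughly a 9.5x reduction in attention compute per layer, far more than the 4x token reduction alone would suggest, which is exactly why visual token compression is one of the survey's primary optimization dimensions.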