[2602.04361] SparVAR: Exploring Sparsity in Visual AutoRegressive Modeling for Training-Free Acceleration
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.04361 (cs)
[Submitted on 4 Feb 2026 (v1), last revised 29 Mar 2026 (this version, v2)]

Title: SparVAR: Exploring Sparsity in Visual AutoRegressive Modeling for Training-Free Acceleration
Authors: Zekun Li, Ning Wang, Tongxin Bai, Changwang Mei, Peisong Wang, Shuang Qiu, Jian Cheng

Abstract: Visual AutoRegressive (VAR) modeling has garnered significant attention for its innovative next-scale prediction paradigm. However, mainstream VAR models attend to all tokens across historical scales at each autoregressive step. As the next-scale resolution grows, the computational cost of attention increases quartically with resolution, causing substantial latency. Prior acceleration methods often skip high-resolution scales, which speeds up inference but discards high-frequency details and harms image quality. To address these problems, we present \textbf{SparVAR}, a training-free acceleration framework that exploits three properties of VAR attention: \textbf{(i) strong attention sinks}, \textbf{(ii) cross-scale activation similarity}, and \textbf{(iii) pronounced locality}. Specifically, we dynamically predict the sparse attention pattern of later high-resolution scales from a sparse decision scale, and construct scale self-similar s...
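The quartic-scaling claim in the abstract follows from two quadratic factors, and can be checked with a line of arithmetic. The sketch below is illustrative only (not code from the paper): a scale with side length n contributes n^2 tokens, and dense self-attention over those tokens builds an n^2 x n^2 score matrix, so cost grows as n^4.

```python
# Illustrative arithmetic for the abstract's complexity claim (a sketch,
# not the paper's implementation): dense attention cost at one VAR scale.

def attention_cost(side: int) -> int:
    """Pairwise attention-score count for a side x side token map."""
    tokens = side * side      # token count grows quadratically with resolution
    return tokens * tokens    # dense attention is quadratic in token count

# Doubling the resolution multiplies the dense-attention cost by 2**4 = 16,
# which is why the final high-resolution scales dominate VAR latency.
assert attention_cost(32) // attention_cost(16) == 16
```

This counts only the final scale's self-attention; mainstream VAR additionally attends to all tokens from earlier (smaller) scales, but since those scales are coarser, the highest-resolution term still dominates.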