[2602.03216] Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection
Computer Science > Computation and Language

arXiv:2602.03216 (cs)

[Submitted on 3 Feb 2026 (v1), last revised 30 Apr 2026 (this version, v2)]

Title: Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection

Authors: Dongwon Jo, Beomseok Kang, Jiwon Song, Jae-Joon Kim

Abstract: The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers; as a result, they can retain irrelevant tokens or commit to irreversible early decisions despite the layer- and head-wise dynamics of token importance. In this paper, we propose Token Sparse Attention, a lightweight and dynamic token-level sparsification mechanism that compresses the per-head $Q$, $K$, and $V$ to a reduced token set during attention and then decompresses the output back to the original sequence, enabling token information to be reconsidered in subsequent layers. Token Sparse Attention thereby exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels...
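To make the compress/attend/decompress flow concrete, below is a minimal sketch of one such interleaved layer. The abstract does not specify the selection criterion or the decompression rule, so the importance score (mean key magnitude), the `keep_ratio` parameter, and the zero-output treatment of unselected tokens are all illustrative assumptions, not the paper's method; causal masking is also omitted for brevity.

```python
# Hypothetical sketch of token-level compress/attend/decompress attention.
# Selection score, keep_ratio, and the decompression rule are assumptions.
import torch
import torch.nn.functional as F

def token_sparse_attention(q, k, v, keep_ratio=0.25):
    """q, k, v: [batch, heads, seq, dim]. Attend over a per-head reduced
    token set, then scatter the output back to the full sequence."""
    B, H, S, D = q.shape
    n_keep = max(1, int(S * keep_ratio))

    # Assumed per-head importance score: mean key magnitude per token.
    score = k.abs().mean(dim=-1)                      # [B, H, S]
    idx = score.topk(n_keep, dim=-1).indices          # [B, H, n_keep]
    gather_idx = idx.unsqueeze(-1).expand(-1, -1, -1, D)

    # Compress: restrict Q, K, V to the selected tokens for this layer only.
    q_s = q.gather(2, gather_idx)
    k_s = k.gather(2, gather_idx)
    v_s = v.gather(2, gather_idx)

    # Dense attention on the reduced set; any dense kernel
    # (e.g., a FlashAttention-style implementation) can serve here.
    out_s = F.scaled_dot_product_attention(q_s, k_s, v_s)

    # Decompress: scatter results back to full length. Unselected tokens
    # output zero here (so they flow through the residual connection and
    # can be reconsidered by later layers); selection is not an eviction.
    out = torch.zeros_like(v)
    out.scatter_(2, gather_idx, out_s)
    return out

q = k = v = torch.randn(1, 8, 1024, 64)
y = token_sparse_attention(q, k, v, keep_ratio=0.25)  # y: [1, 8, 1024, 64]
```

Because the reduced set is re-selected at every layer and head, a token dropped in one layer can re-enter attention later, which is the property distinguishing this design from permanent KV-cache eviction.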