[2602.03216] Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection

arXiv - Machine Learning

Computer Science > Computation and Language

arXiv:2602.03216 (cs) [Submitted on 3 Feb 2026 (v1), last revised 30 Apr 2026 (this version, v2)]

Title: Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection

Authors: Dongwon Jo, Beomseok Kang, Jiwon Song, Jae-Joon Kim

Abstract: The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers; the former can retain irrelevant tokens, while the latter commits to irreversible early decisions despite the layer- and head-wise dynamics of token importance. In this paper, we propose Token Sparse Attention, a lightweight and dynamic token-level sparsification mechanism that compresses per-head $Q$, $K$, $V$ to a reduced token set during attention and then decompresses the output back to the original sequence, enabling token information to be reconsidered in subsequent layers. Furthermore, Token Sparse Attention exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels...
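The compress-attend-decompress loop the abstract describes is easy to picture in code. Below is a minimal PyTorch sketch of that idea, not the paper's implementation: the key-norm importance score, the keep_ratio parameter, and the zero-filled decompression are illustrative assumptions, since the truncated abstract does not specify the selection criterion or how unselected positions are handled.

```python
# Minimal sketch of per-head compress -> dense attention -> decompress.
# Importance scoring and decompression strategy are assumptions, not the
# paper's actual design.
import torch
import torch.nn.functional as F

def token_sparse_attention(q, k, v, keep_ratio=0.5):
    """q, k, v: [batch, heads, seq, dim]. Returns [batch, heads, seq, dim]."""
    B, H, S, D = q.shape
    n_keep = max(1, int(S * keep_ratio))

    # Per-head token importance (assumption: L2 norm of keys as a cheap proxy).
    scores = k.norm(dim=-1)                      # [B, H, S]
    idx = scores.topk(n_keep, dim=-1).indices    # [B, H, n_keep]
    idx = idx.sort(dim=-1).values                # restore original token order
    gather_idx = idx.unsqueeze(-1).expand(-1, -1, -1, D)

    # Compress: restrict Q, K, V to the selected tokens for each head.
    q_s = q.gather(2, gather_idx)
    k_s = k.gather(2, gather_idx)
    v_s = v.gather(2, gather_idx)

    # Dense (Flash-compatible) attention on the reduced token set; because the
    # indices are sorted, a causal mask here respects the original order.
    out_s = F.scaled_dot_product_attention(q_s, k_s, v_s, is_causal=True)

    # Decompress: scatter outputs back to their original sequence positions.
    # Unselected tokens get zeros in this sketch, so (e.g., via the residual
    # stream) their information can still be reconsidered by later layers.
    out = torch.zeros_like(q)
    out.scatter_(2, gather_idx, out_s)
    return out

# Smoke test on random inputs.
q = torch.randn(1, 8, 1024, 64)
k = torch.randn(1, 8, 1024, 64)
v = torch.randn(1, 8, 1024, 64)
out = token_sparse_attention(q, k, v, keep_ratio=0.25)
print(out.shape)  # torch.Size([1, 8, 1024, 64])
```

Note that selection happens inside the attention call at every layer rather than by evicting entries from the KV cache, which is what lets tokens dropped at one layer be reconsidered at the next, and why the inner attention can reuse any dense kernel such as Flash Attention.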

Originally published on May 04, 2026. Curated by AI News.

Related Articles

Excellent discussion about LLM scaling [D]
I came across an excellent in-depth discussion of memory and compute scaling analysis for LLMs. One takeaway is that running LLMs locally...
Reddit - Machine Learning

[2601.21214] Scaling Reasoning Hop Exposes Weaknesses: Demystifying and Improving Hop Generalization in Large Language Models
arXiv - Machine Learning

[2510.23557] Minimizing Human Intervention in Online Classification
arXiv - Machine Learning

[2510.18900] Foundation Models for Discovery and Exploration in Chemical Space
arXiv - Machine Learning