[2602.13515] SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning

arXiv - Machine Learning

Summary

The paper presents SpargeAttention2, a novel trainable sparse attention method that enhances the efficiency of diffusion models by combining Top-k and Top-p masking techniques, achieving high sparsity and speed without compromising generation quality.

Why It Matters

As machine learning models grow in complexity, optimizing their efficiency is crucial. SpargeAttention2 addresses the failure modes of existing sparse attention masking rules at high sparsity, showing that trainable sparse attention can accelerate diffusion models without degrading output quality, which is vital for applications in computer vision and beyond.

Key Takeaways

  • SpargeAttention2 combines Top-k and Top-p masking for improved performance (a sketch of the hybrid rule follows this list).
  • The method achieves 95% attention sparsity with a 16.2x speedup.
  • Trainable sparse attention can outperform traditional training-free methods.
  • The approach includes a distillation-inspired fine-tuning objective.
  • Experiments demonstrate maintained generation quality despite high sparsity.
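
The hybrid masking rule is only named in this summary, so the following is a minimal PyTorch sketch of one plausible reading: a score is kept if it is selected by Top-k or falls inside the Top-p nucleus, so neither rule's failure mode alone discards important attention mass. The function name hybrid_topk_topp_mask and the union-of-masks combination are illustrative assumptions, not the paper's implementation.

```python
import torch

def hybrid_topk_topp_mask(scores: torch.Tensor, k: int, p: float) -> torch.Tensor:
    """Illustrative hybrid Top-k + Top-p mask over the last dimension.

    Returns a boolean mask the same shape as `scores`, keeping a position
    if it is among the k largest scores OR inside the smallest prefix of
    descending scores whose softmax mass reaches p.
    """
    # Top-k rule: keep the k largest scores.
    topk_mask = torch.zeros_like(scores, dtype=torch.bool)
    topk_mask.scatter_(-1, scores.topk(k, dim=-1).indices, True)

    # Top-p rule: keep the smallest set of scores (taken in descending
    # order) whose cumulative softmax probability reaches p.
    sorted_scores, sorted_idx = scores.sort(dim=-1, descending=True)
    probs = sorted_scores.softmax(dim=-1)
    keep_sorted = probs.cumsum(dim=-1) - probs < p  # first element always kept
    topp_mask = torch.zeros_like(scores, dtype=torch.bool)
    topp_mask.scatter_(-1, sorted_idx, keep_sorted)

    # Hybrid rule (assumed here to be the union): a position survives if
    # either criterion selects it, which is more robust at high sparsity.
    return topk_mask | topp_mask
```

In a block-sparse attention kernel the same selection logic would typically be applied to per-block score estimates rather than individual logits, but the masking decision is the same.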

Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.13515 (cs) [Submitted on 13 Feb 2026]

Title: SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning

Authors: Jintao Zhang, Kai Jiang, Chendong Xiang, Weiqi Feng, Yuezhou Hu, Haocheng Xi, Jianfei Chen, Jun Zhu

Abstract: Many training-free sparse attention methods are effective for accelerating diffusion models. Recently, several works suggest that making sparse attention trainable can further increase sparsity while preserving generation quality. We study three key questions: (1) when do the two common masking rules, i.e., Top-k and Top-p, fail, and how can we avoid these failures? (2) why can trainable sparse attention reach higher sparsity than training-free methods? (3) what are the limitations of fine-tuning sparse attention using the diffusion loss, and how can we address them? Based on this analysis, we propose SpargeAttention2, a trainable sparse attention method that achieves high sparsity without degrading generation quality. SpargeAttention2 includes (i) a hybrid masking rule that combines Top-k and Top-p for more robust masking at high sparsity, (ii) an efficient trainable sparse attention implementation, and (iii) a distillation-inspired fine-tuning...
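
The abstract only names a distillation-inspired fine-tuning objective, so here is a minimal, hypothetical sketch of how such an objective could look: a frozen full-attention copy of the diffusion model acts as the teacher, and the sparse-attention student is trained to reproduce the teacher's denoising output on the same noisy input rather than relying only on the standard diffusion loss. The function distillation_step and the plain MSE matching are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, noisy_latents, timestep):
    """Hypothetical distillation-style fine-tuning step.

    `teacher` is the original model run with dense attention and frozen
    weights; `student` is the same model run with the hybrid sparse mask.
    The student is trained to match the teacher's output on identical inputs.
    """
    with torch.no_grad():
        target = teacher(noisy_latents, timestep)  # full-attention reference
    pred = student(noisy_latents, timestep)        # sparse-attention prediction
    return F.mse_loss(pred, target)
```

Compared with fine-tuning on the diffusion loss alone, matching a dense-attention teacher gives the student a per-sample target computed from the same input, which is the usual motivation for distillation-style objectives.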
