[2502.07861] Streaming Attention Approximation via Discrepancy Theory
Computer Science > Machine Learning
arXiv:2502.07861 (cs)
[Submitted on 11 Feb 2025 (v1), last revised 24 Mar 2026 (this version, v3)]

Title: Streaming Attention Approximation via Discrepancy Theory
Authors: Ekaterina Kochetkova, Kshiteej Sheth, Insu Han, Amir Zandieh, Michael Kapralov

Abstract: Large language models (LLMs) have achieved impressive success, but their high memory requirements present challenges for long-context token generation. In this paper we study the streaming complexity of attention approximation, a key computational primitive underlying token generation. Our main contribution is BalanceKV, a streaming algorithm for $\epsilon$-approximating attention computations, based on a geometric process that selects a balanced collection of Key and Value tokens following Banaszczyk's vector balancing theory. We complement our algorithm with space lower bounds for streaming attention computation. Beyond strong theoretical guarantees, BalanceKV exhibits empirically validated performance improvements over existing methods, both for attention approximation and for end-to-end performance on various long-context benchmarks.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Data Structures and Algorithms (cs.DS)
Cite as: arXiv:2502.07861 [cs.LG] (or arXiv:2502.07861v3 [cs.LG] for this version) http...
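To make the abstract's idea concrete, here is a minimal hypothetical sketch (not the paper's BalanceKV algorithm) of discrepancy-based KV-cache compression: a self-balancing signing in the spirit of vector balancing assigns ±1 signs to streamed key vectors so that the running signed sum stays small, and keeping one sign class retains roughly half the tokens while approximately preserving attention statistics. The bound parameter `c`, the sign-probability rule, and all function names below are assumptions for illustration only.

```python
import numpy as np

def balance_signs(vectors, c=30.0, seed=0):
    """Assign +/-1 signs online, biasing each sign so the running signed
    sum shrinks (a self-balancing-walk heuristic; c is an assumed bound
    parameter, not a value from the paper)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(vectors.shape[1])            # running signed sum
    signs = np.empty(len(vectors), dtype=int)
    for i, v in enumerate(vectors):
        # Bias toward the sign that reduces the correlation w @ v.
        p = np.clip(0.5 - (w @ v) / (2.0 * c), 0.0, 1.0)
        s = 1 if rng.random() < p else -1
        w += s * v
        signs[i] = s
    return signs

def subset_attention(q, K, V, mask):
    """Softmax attention for a single query, restricted to kept tokens;
    a uniform reweighting of the kept half cancels inside the softmax."""
    scores = K[mask] @ q
    p = np.exp(scores - scores.max())         # numerically stable softmax
    p /= p.sum()
    return p @ V[mask]

# Usage: keep the +1 half of a streamed KV cache and attend over it.
rng = np.random.default_rng(1)
K = rng.standard_normal((512, 16)) / 4.0
V = rng.standard_normal((512, 16))
q = rng.standard_normal(16)
mask = balance_signs(K) == 1                  # retain ~half the tokens
out = subset_attention(q, K, V, mask)
```

The point of the sign-balancing step, as opposed to uniform subsampling, is that the kept half is chosen so its aggregate key statistics track those of the full stream, which is what the discrepancy-theoretic guarantees formalize.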