[2603.20616] Beyond Token Eviction: Mixed-Dimension Budget Allocation for Efficient KV Cache Compression
Computer Science > Machine Learning
arXiv:2603.20616 (cs) [Submitted on 21 Mar 2026]

Title: Beyond Token Eviction: Mixed-Dimension Budget Allocation for Efficient KV Cache Compression
Authors: Ruijie Miao, Zhiming Wang, Wang Li, Shiwei Wu, Shufan Liu, Yanbing Jiang, Tong Yang

Abstract: Key-value (KV) caching is widely used to accelerate transformer inference, but its memory cost grows linearly with input length, limiting long-context deployment. Existing token eviction methods reduce memory by discarding less important tokens, which can be viewed as a coarse form of dimensionality reduction that assigns each token either zero or full dimension. We propose MixedDimKV, a mixed-dimension KV cache compression method that allocates dimensions to tokens at a more granular level, and MixedDimKV-H, which further integrates head-level importance information. Experiments on long-context benchmarks show that MixedDimKV outperforms prior KV cache compression methods that do not rely on head-level importance profiling. When equipped with the same head-level importance information, MixedDimKV-H consistently outperforms HeadKV. Notably, our approach achieves comparable performance to full attention on LongBench with only 6.25% of the KV cache. Furthermore, in the Needle-in-a-Haystack test, our solution mai...
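The abstract frames token eviction as a special case of dimensionality reduction in which each token receives either zero or the full head dimension. The sketch below illustrates that framing only: it allocates a per-token dimension proportional to a hypothetical importance score under a global budget. It is not the paper's actual algorithm; the function names, the proportional allocation rule, and the choice to keep the leading channels are all illustrative assumptions.

```python
import numpy as np

def allocate_mixed_dims(importance, head_dim, budget_ratio):
    """Assign each token a KV dimension under a global memory budget.

    Token eviction is the degenerate case where every token gets
    either 0 or `head_dim`; here the allocation is proportional to a
    per-token importance score (a hypothetical scoring scheme).
    """
    importance = np.asarray(importance, dtype=float)
    total_budget = budget_ratio * head_dim * len(importance)
    raw = importance / importance.sum() * total_budget
    return np.clip(np.round(raw).astype(int), 0, head_dim)

def compress_kv(kv_vectors, dims):
    """Truncate token i's KV vector to its allocated dims[i] channels.

    Keeping the leading channels is a placeholder; a real method would
    pick which channels (or projected components) to retain.
    """
    return [vec[:d] for vec, d in zip(kv_vectors, dims)]

# Toy example: 4 tokens, head_dim = 8, 50% overall KV budget.
scores = [4.0, 2.0, 1.0, 1.0]
dims = allocate_mixed_dims(scores, head_dim=8, budget_ratio=0.5)
kv = [np.arange(8, dtype=float) for _ in range(4)]
compressed = compress_kv(kv, dims)
print(list(dims))                      # per-token dimensions
print([len(v) for v in compressed])    # stored channels per token
```

With these toy scores the total budget is 0.5 * 8 * 4 = 16 channels, split proportionally as 8, 4, 2, 2, so the most important token keeps its full dimension while less important tokens are compressed rather than evicted outright.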