[2603.20397] KV Cache Optimization Strategies for Scalable and Efficient LLM Inference
Computer Science > Machine Learning
arXiv:2603.20397 (cs)
[Submitted on 20 Mar 2026]

Title: KV Cache Optimization Strategies for Scalable and Efficient LLM Inference
Authors: Yichun Xu, Navjot K. Khaira, Tejinder Singh

Abstract: The key-value (KV) cache is a foundational optimization in Transformer-based large language models (LLMs), eliminating redundant recomputation of past token representations during autoregressive generation. However, its memory footprint scales linearly with context length, imposing critical bottlenecks on GPU memory capacity, memory bandwidth, and inference throughput as production LLMs push context windows from thousands to millions of tokens. Efficient KV cache management has thus become a first-order challenge for scalable LLM deployment. This paper provides a systematic review of recent KV cache optimization techniques, organizing them into five principal directions: cache eviction, cache compression, hybrid memory solutions, novel attention mechanisms, and combination strategies. For each category we analyze the underlying mechanisms, deployment trade-offs, and empirical performance across memory reduction, throughput, and model accuracy metrics. We further map techniques to seven practical deployment scenarios, including long-context single requests, high-throughput datacenter ser...
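The abstract's central claim, that KV cache memory grows linearly with context length, can be made concrete with a back-of-the-envelope calculator. The sketch below uses the standard footprint formula (2 tensors, K and V, per layer, each of shape heads × head_dim per token); the model configuration is an illustrative 7B-class setup chosen here, not one taken from the paper:

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len,
                   batch_size=1, bytes_per_elem=2):
    """Total bytes held by the KV cache.

    Factor of 2 covers the K and V tensors; bytes_per_elem=2 assumes
    fp16/bf16 storage. Scales linearly in seq_len, as the abstract notes.
    """
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

# Hypothetical 7B-class config: 32 layers, 32 KV heads, head_dim 128.
cfg = dict(num_layers=32, num_kv_heads=32, head_dim=128)

for ctx in (4_096, 32_768, 1_000_000):
    gib = kv_cache_bytes(seq_len=ctx, **cfg) / 2**30
    print(f"context {ctx:>9,} tokens: {gib:8.1f} GiB")
```

At these (assumed) dimensions the cache costs 512 KiB per token, so a 4K context needs 2 GiB while a million-token context needs roughly 488 GiB, which is the scaling pressure motivating the eviction and compression techniques the survey reviews.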