[2506.02634] KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider
Summary
This paper presents the first systematic characterization of KVCache caching — reusing stored attention key-value (KV) states across requests — in production LLM serving at a major cloud provider, and proposes a workload-aware eviction policy that improves performance.
Why It Matters
Understanding KVCache behavior in production is crucial for cloud providers serving LLMs, since cache hit rates directly affect serving throughput and latency. By analyzing real-world traces rather than synthetic workloads, this research grounds eviction-policy and capacity-planning decisions in observed caching behavior, enabling more efficient use of serving resources.
Key Takeaways
- KVCache significantly improves serving throughput and latency for LLMs.
- KV cache reuse is skewed across requests, and reuse between single-turn requests matters as much as reuse within multi-turn conversations.
- A workload-aware cache eviction policy can enhance performance under limited cache capacity.
- The required cache size for optimal performance is moderate, contrary to previous assumptions.
- This study fills a gap in understanding real-world caching compared to synthetic workload studies.
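The workload-aware eviction idea in the takeaways can be illustrated with a toy policy that scores each cached entry by an estimated per-category reuse probability and evicts the least valuable entry per byte. This is only a hedged sketch: the class, categories, and probabilities below are hypothetical stand-ins, not the paper's actual algorithm.

```python
class WorkloadAwareCache:
    """Toy KV cache that evicts the entry with the lowest estimated reuse
    value per byte.

    Each entry belongs to a request category (e.g. single-turn vs.
    multi-turn chat); reuse_prob maps a category to an assumed probability
    that its cached prefix will be reused. All names and numbers here are
    illustrative.
    """

    def __init__(self, capacity, reuse_prob):
        self.capacity = capacity          # total budget (arbitrary units)
        self.reuse_prob = reuse_prob      # category -> assumed reuse prob.
        self.entries = {}                 # key -> (category, size)
        self.used = 0

    def _score(self, key):
        category, size = self.entries[key]
        # Higher expected reuse per byte -> keep longer.
        return self.reuse_prob.get(category, 0.0) / size

    def put(self, key, category, size):
        # Evict lowest-scoring entries until the new one fits.
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=self._score)
            self.used -= self.entries.pop(victim)[1]
        self.entries[key] = (category, size)
        self.used += size


cache = WorkloadAwareCache(capacity=100,
                           reuse_prob={"multi_turn": 0.8, "single_turn": 0.3})
cache.put("a", "single_turn", 60)
cache.put("b", "multi_turn", 30)
cache.put("c", "multi_turn", 40)   # evicts "a": lowest reuse value per byte
print(sorted(cache.entries))       # ['b', 'c']
```

Unlike plain LRU, which would evict whichever entry was touched least recently, this sketch lets the operator bias retention toward request categories with predictably high reuse — the kind of category-level predictability the paper reports.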
Computer Science > Distributed, Parallel, and Cluster Computing
arXiv:2506.02634 (cs)
[Submitted on 3 Jun 2025 (v1), last revised 14 Feb 2026 (this version, v5)]
Authors: Jiahao Wang, Jinbo Han, Xingda Wei, Sijie Shen, Dingyan Zhang, Chenguang Fang, Rong Chen, Wenyuan Yu, Haibo Chen
Abstract: Serving large language models (LLMs) is important for cloud providers, and caching intermediate results (KV$) after processing each request substantially improves serving throughput and latency. However, there is limited understanding of how LLM serving benefits from KV$ caching, where system design decisions like cache eviction policies are highly workload-dependent. In this paper, we present the first systematic characterization of the KV$ workload patterns from one of the leading LLM service providers. We draw observations that were not covered by previous studies focusing on synthetic workloads, including: KV$ reuses are skewed across requests, where reuses between single-turn requests are equally important as multi-turn requests; the reuse time and probability are diverse considering all requests, but for a specific request category, the pattern tends to be predictable; and the overall c...
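The abstract describes caching intermediate KV results so that shared prompt prefixes are not recomputed at prefill time. One common way such caches are keyed is by hash-chaining fixed-size token blocks, so a block's key depends on the entire prefix before it. The sketch below illustrates that scheme under stated assumptions — the block size, hashing, and function names are hypothetical, not the paper's design:

```python
import hashlib

def chunk_keys(token_ids, block=4):
    """Map a token prefix to keys for fixed-size KV blocks.

    Keys are hash-chained: each block's key covers every earlier block,
    so two requests share a key only if their whole prefix matches.
    """
    keys, h = [], hashlib.sha256()
    for i in range(0, len(token_ids) - len(token_ids) % block, block):
        h.update(str(token_ids[i:i + block]).encode("utf-8"))
        keys.append(h.copy().hexdigest()[:12])
    return keys

store = {}  # block key -> cached KV state (a stand-in for real tensors)

def prefill_with_cache(token_ids, block=4):
    """Return (tokens served from cache, tokens that must be recomputed)."""
    hit = 0
    for key in chunk_keys(token_ids, block):
        if key in store:
            hit += block
        else:
            store[key] = True  # compute and cache the block's KV state
    return hit, len(token_ids) - hit

print(prefill_with_cache([1, 2, 3, 4, 5, 6, 7, 8]))     # (0, 8): all miss
print(prefill_with_cache([1, 2, 3, 4, 5, 6, 7, 8, 9]))  # (8, 1): prefix hits
```

The second request recomputes only its final, partial block; the first eight tokens are served from cache because their block keys match the earlier request's prefix.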