[2603.20218] An experimental study of KV cache reuse strategies in chunk-level caching systems
Computer Science > Computation and Language

arXiv:2603.20218 (cs) [Submitted on 3 Mar 2026]

Title: An experimental study of KV cache reuse strategies in chunk-level caching systems

Authors: Samuel Cestola, Tianxiang Xia, Zheng Weiyan, Zheng Pengfei, Diego Didona

Abstract: Retrieval-augmented generation (RAG) improves large language models' accuracy by adding relevant retrieved text to the prompt. Chunk-level caching (CLC) accelerates inference by precomputing KV caches for these retrieved chunks and reusing them. However, these caches miss cross-attention dependencies between chunks, which can reduce output quality. Several methods attempt to improve CLC accuracy using different techniques. We make two main contributions. First, we show that existing CLC approaches have fundamental limitations that constrain either their accuracy or their applicability, and we support this conclusion with an extensive experimental evaluation of CLC systems. Second, we observe that existing CLC techniques are complementary, and we leverage this insight to propose a new CLC design that carefully combines them and achieves better accuracy.

Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
ACM classes: I.2.7
Cite as: arXiv:2603.20218 [cs.CL] (or arXiv:2603.20218v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.20218
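The abstract's core observation, that precomputing KV caches per chunk in isolation drops cross-attention dependencies between chunks, can be illustrated with a toy, pure-Python sketch. The single-head attention function and the two-dimensional chunk embeddings below are hypothetical illustrations, not the paper's models or method: they show only that per-chunk attention outputs diverge from full-prompt attention over the same tokens.

```python
import math

def attention(queries, keys, values):
    """Single-head scaled dot-product attention over lists of vectors."""
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q))
                  for k in keys]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted sum of value vectors.
        out.append([sum(w * v[d] for w, v in zip(weights, values))
                    for d in range(len(values[0]))])
    return out

# Hypothetical token embeddings for two retrieved chunks.
chunk_a = [[1.0, 0.0], [0.5, 0.5]]
chunk_b = [[0.0, 1.0], [0.3, 0.7]]

# Full-prompt attention: every token attends across both chunks.
full = attention(chunk_a + chunk_b, chunk_a + chunk_b, chunk_a + chunk_b)

# Chunk-level caching analogue: each chunk's attention is computed in
# isolation, then the per-chunk results are concatenated -- the
# cross-chunk attention terms are simply missing.
clc = (attention(chunk_a, chunk_a, chunk_a)
       + attention(chunk_b, chunk_b, chunk_b))

print(full[0] != clc[0])  # True: outputs diverge for the same token
```

The divergence is exactly the accuracy gap the paper's evaluated CLC techniques try to close while keeping the precomputed caches reusable.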