[2602.22603] SideQuest: Model-Driven KV Cache Management for Long-Horizon Agentic Reasoning

arXiv - Machine Learning

Summary

The paper presents SideQuest, a novel model-driven approach for managing the KV cache in long-horizon reasoning tasks, reducing peak token usage by up to 65% with minimal accuracy loss.

Why It Matters

As AI models increasingly handle complex, long-running tasks, efficient memory management becomes crucial. SideQuest addresses the limitations of existing KV cache techniques, enhancing the performance of large language models in multi-step reasoning scenarios, which is vital for applications in AI-driven research and decision-making.

Key Takeaways

  • SideQuest reduces peak token usage by up to 65% in long-horizon tasks.
  • The approach leverages the Large Reasoning Model (LRM) itself to reason about which context tokens are worth keeping, running compression as an auxiliary task alongside the main reasoning task.
  • Accuracy degrades only minimally under compression, and SideQuest outperforms heuristic-based KV cache compression methods.
  • Addresses the challenge of managing memory in multi-hop reasoning tasks.
  • Demonstrates the potential for improved performance in AI applications.

Computer Science > Artificial Intelligence
arXiv:2602.22603 (cs) [Submitted on 26 Feb 2026]

Title: SideQuest: Model-Driven KV Cache Management for Long-Horizon Agentic Reasoning
Authors: Sanjay Kariyappa, G. Edward Suh

Abstract: Long-running agentic tasks, such as deep research, require multi-hop reasoning over information distributed across multiple webpages and documents. In such tasks, the LLM context is dominated by tokens from external retrieval, causing memory usage to grow rapidly and limiting decode performance. While several KV cache compression techniques exist for long-context inputs, we find that existing heuristics fail to support multi-step reasoning models effectively. We address this challenge with SideQuest -- a novel approach that leverages the Large Reasoning Model (LRM) itself to perform KV cache compression by reasoning about the usefulness of tokens in its context. To prevent the tokens associated with this management process from polluting the model's memory, we frame KV cache compression as an auxiliary task executed in parallel to the main reasoning task. Our evaluations, using a model trained with just 215 samples, show that SideQuest reduces peak token usage by up to 65% on agentic tasks with minimal degradation in accuracy, outperforming heuristic-based KV cache compression...
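The core idea in the abstract is usefulness-driven eviction: the reasoning model scores cached context spans, and low-value spans are dropped to fit a token budget. The sketch below is an illustration only, not the paper's implementation: `CacheEntry`, `compress_kv_cache`, and the greedy keep-highest-usefulness policy are hypothetical names and assumptions, and the usefulness scores (which SideQuest would obtain from the LRM in a parallel auxiliary task) are given here as plain floats.

```python
from dataclasses import dataclass


@dataclass
class CacheEntry:
    """One span of retrieved context held in the KV cache."""
    span_id: int      # position of the span in the original context
    n_tokens: int     # how many cached tokens the span occupies
    usefulness: float # stand-in for the LRM's relevance judgment


def compress_kv_cache(entries: list[CacheEntry], budget_tokens: int) -> list[CacheEntry]:
    """Greedily keep the most useful spans until the token budget is filled.

    This mimics model-driven eviction at a high level: spans the model deems
    least useful to the ongoing task are dropped first.
    """
    # Consider spans from most to least useful.
    ranked = sorted(entries, key=lambda e: e.usefulness, reverse=True)
    kept: list[CacheEntry] = []
    total = 0
    for entry in ranked:
        if total + entry.n_tokens <= budget_tokens:
            kept.append(entry)
            total += entry.n_tokens
    # Restore the original context order for the surviving spans.
    kept.sort(key=lambda e: e.span_id)
    return kept


if __name__ == "__main__":
    cache = [
        CacheEntry(0, 400, 0.9),
        CacheEntry(1, 300, 0.2),  # low usefulness: evicted first
        CacheEntry(2, 200, 0.7),
        CacheEntry(3, 100, 0.5),
    ]
    survivors = compress_kv_cache(cache, budget_tokens=700)
    print([e.span_id for e in survivors])  # spans kept within the budget
```

In this toy run, the 1000 cached tokens are cut to 700 by evicting the lowest-scored span; the paper's auxiliary-task framing additionally keeps the scoring tokens themselves out of the main context, which this sketch does not model.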
