[2601.14724] HERMES: KV Cache as Hierarchical Memory for Efficient Streaming Video Understanding

Computer Science > Computer Vision and Pattern Recognition
arXiv:2601.14724 (cs)
[Submitted on 21 Jan 2026 (v1), last revised 15 Apr 2026 (this version, v3)]

Title: HERMES: KV Cache as Hierarchical Memory for Efficient Streaming Video Understanding
Authors: Haowei Zhang, Shudong Yang, Jinlan Fu, See-Kiong Ng, Xipeng Qiu

Abstract: Recent advances in Multimodal Large Language Models (MLLMs) have brought significant improvements in offline video understanding. However, extending these capabilities to streaming video inputs remains challenging, as existing models struggle to simultaneously maintain stable understanding performance, real-time responses, and low GPU memory overhead. To address this challenge, we propose HERMES, a novel training-free architecture for real-time and accurate understanding of video streams. Based on a mechanistic attention investigation, we conceptualize the KV cache as a hierarchical memory framework that encapsulates video information across multiple granularities. During inference, HERMES reuses a compact KV cache, enabling efficient streaming understanding under resource constraints. Notably, HERMES requires no auxiliary computation upon the arrival of user queries, thereby guaranteeing real-time responses for continuous video-stream interactions, and achieves 10$\times$ faster TTFT...
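The abstract describes the mechanism only at a high level: the KV cache is treated as a hierarchical memory spanning multiple granularities, compressed continuously as the stream arrives so that no auxiliary work is needed when a query comes in. As a rough illustration of that idea, here is a minimal Python sketch of a three-tier cache; the class name HierarchicalKVCache, the tier capacities, and the mean-pooling compressor are hypothetical choices for illustration, not details taken from the paper.

```python
import numpy as np

class HierarchicalKVCache:
    """Toy three-tier KV cache acting as hierarchical memory.

    Tier capacities, the mean-pooling compressor, and the eviction
    schedule are illustrative assumptions, not HERMES's actual design.
    """

    def __init__(self, recent_capacity=64, mid_capacity=32, pool=4):
        self.recent = []    # fine granularity: one KV entry per frame
        self.mid = []       # medium granularity: pooled clip entries
        self.longterm = []  # coarse granularity: pooled history
        self.recent_capacity = recent_capacity
        self.mid_capacity = mid_capacity
        self.pool = pool

    def append_frame(self, kv):
        """Ingest one frame's KV entry and compress older tiers in place."""
        self.recent.append(kv)
        if len(self.recent) > self.recent_capacity:
            # Fold the oldest `pool` frame entries into one clip-level entry.
            self.mid.append(np.mean(self.recent[: self.pool], axis=0))
            del self.recent[: self.pool]
        if len(self.mid) > self.mid_capacity:
            # Fold the oldest clip entries into one long-term entry.
            self.longterm.append(np.mean(self.mid[: self.pool], axis=0))
            del self.mid[: self.pool]

    def context(self):
        """Return the compact cache in temporal order (coarse -> fine).

        All compression happened during ingestion, so a query can attend
        over this cache immediately, with no extra work at query time.
        """
        return np.stack(self.longterm + self.mid + self.recent)


# Stream 500 frames, then "answer" a query by reading the compact cache.
cache = HierarchicalKVCache()
for _ in range(500):
    cache.append_frame(np.random.randn(8, 64))  # stand-in for a frame's KV
print(cache.context().shape)  # far fewer than 500 entries remain
```

Because each tier is compacted as frames stream in, query-time cost depends only on the small cache size rather than the full stream length, which mirrors the property the abstract credits for HERMES's 10$\times$ faster time-to-first-token (TTFT).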

Originally published on April 17, 2026. Curated by AI News.

Related Articles

[2603.13683] Preconditioned Test-Time Adaptation for Out-of-Distribution Debiasing in Narrative Generation

[2602.03295] POP: Prefill-Only Pruning for Efficient Large Model Inference

[2601.15488] Multi-Persona Thinking for Bias Mitigation in Large Language Models

[2601.10120] TopoDIM: One-shot Topology Generation of Diverse Interaction Modes for Multi-Agent Systems

