[2603.27593] STRIDE: When to Speak Meets Sequence Denoising for Streaming Video Understanding
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.27593 (cs) [Submitted on 29 Mar 2026]

Title: STRIDE: When to Speak Meets Sequence Denoising for Streaming Video Understanding

Authors: Junho Kim, Hosu Lee, James M. Rehg, Minsu Kim, Yong Man Ro

Abstract: Recent progress in video large language models (Video-LLMs) has enabled strong offline reasoning over long and complex videos. However, real-world deployments increasingly require streaming perception and proactive interaction, where video frames arrive online and the system must decide not only how to respond, but also when to respond. In this work, we revisit proactive activation in streaming video as a structured sequence modeling problem, motivated by the observation that temporal transitions in streaming video naturally form span-structured activation patterns. To capture this span-level structure, we model activation signals jointly over a sliding temporal window and update them iteratively as new frames arrive. We propose STRIDE (Structured Temporal Refinement with Iterative DEnoising), which employs a lightweight masked diffusion module at the activation interface to jointly predict and progressively refine activation signals across the window. Extensive experiments on diverse streaming benchmarks and downstream models demonstrate...
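The abstract only describes the mechanism at a high level. As a rough illustration of the general idea it names, iterative masked denoising of speak/silent activation signals over a sliding window of frames, here is a minimal Python/PyTorch sketch. This is not the authors' code: the MaskedDenoiser module, the 16-frame window, the binary speak/silent tokens, and the confidence-based unmasking schedule are all illustrative assumptions, not the paper's actual architecture or training setup.

import torch
import torch.nn as nn

MASK = 2  # hypothetical token id for an "unknown" activation; 0 = stay silent, 1 = speak

class MaskedDenoiser(nn.Module):
    """Tiny stand-in for a masked diffusion module: predicts activation
    logits for every position in the current sliding window."""
    def __init__(self, feat_dim=256, window=16, hidden=128):
        super().__init__()
        self.act_embed = nn.Embedding(3, feat_dim)   # embeds 0 / 1 / MASK
        self.pos_embed = nn.Embedding(window, feat_dim)
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.GELU(), nn.Linear(hidden, 2))

    def forward(self, frame_feats, act_tokens):
        # frame_feats: (W, D) per-frame features; act_tokens: (W,) in {0, 1, MASK}
        pos = torch.arange(act_tokens.size(0))
        x = frame_feats + self.act_embed(act_tokens) + self.pos_embed(pos)
        return self.head(x)                          # (W, 2) logits per position

@torch.no_grad()
def refine_window(model, frame_feats, steps=4):
    """Iterative denoising: start fully masked, then commit the most
    confident positions at each step until the window is filled in."""
    W = frame_feats.size(0)
    tokens = torch.full((W,), MASK, dtype=torch.long)
    for step in range(steps):
        remaining = int((tokens == MASK).sum())
        if remaining == 0:
            break
        logits = model(frame_feats, tokens)
        conf, pred = logits.softmax(-1).max(-1)
        conf[tokens != MASK] = -1.0                  # committed slots stay fixed
        k = max(1, remaining // (steps - step))      # linear unmasking schedule
        idx = conf.topk(k).indices
        tokens[idx] = pred[idx]
    return tokens  # per-frame speak/silent decisions for this window

# Streaming usage: as each new frame arrives, slide the window forward
# and re-refine, so earlier tentative decisions get progressively revised.
model = MaskedDenoiser()
feats = torch.randn(64, 256)                         # placeholder frame features
for t in range(16, feats.size(0) + 1):
    decisions = refine_window(model, feats[t - 16:t])
    if decisions[-1] == 1:                           # newest frame triggers a response
        print(f"respond at frame {t - 1}")

The commit-then-freeze confidence schedule above is one common masked-diffusion decoding heuristic chosen here for brevity; the paper's actual refinement procedure, conditioning, and training objective may differ.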