[2512.01707] StreamGaze: Gaze-Guided Temporal Reasoning and Proactive Understanding in Streaming Videos
Computer Science > Computer Vision and Pattern Recognition

arXiv:2512.01707 (cs)

[Submitted on 1 Dec 2025 (v1), last revised 27 Mar 2026 (this version, v2)]

Title: StreamGaze: Gaze-Guided Temporal Reasoning and Proactive Understanding in Streaming Videos

Authors: Daeun Lee, Subhojyoti Mukherjee, Branislav Kveton, Ryan A. Rossi, Viet Dac Lai, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Mohit Bansal

Abstract: Streaming video understanding requires models not only to process temporally incoming frames but also to anticipate user intention for realistic applications such as Augmented Reality (AR) glasses. While prior streaming benchmarks evaluate temporal reasoning, none measure whether Multimodal Large Language Models (MLLMs) can interpret or leverage human gaze signals in a streaming setting. To fill this gap, we introduce StreamGaze, the first benchmark designed to evaluate how effectively MLLMs use gaze for temporal and proactive reasoning in streaming videos. StreamGaze introduces gaze-guided past, present, and proactive tasks that comprehensively assess streaming video understanding. These tasks evaluate whether models can use real-time gaze signals to follow shifting attention and infer user intentions from only past and currently observed frames. To build StreamGaze, we deve...