[2601.18157] Agentic Very Long Video Understanding
Computer Science > Computer Vision and Pattern Recognition

arXiv:2601.18157 (cs)

[Submitted on 26 Jan 2026 (v1), last revised 5 Mar 2026 (this version, v2)]

Title: Agentic Very Long Video Understanding
Authors: Aniket Rege, Arka Sadhu, Yuliang Li, Kejie Li, Ramya Korlakai Vinayak, Yuning Chai, Yong Jae Lee, Hyo Jin Kim

Abstract: The advent of always-on personal AI assistants, enabled by all-day wearable devices such as smart glasses, demands a new level of contextual understanding, one that goes beyond short, isolated events to encompass the continuous, longitudinal stream of egocentric video. Achieving this vision requires advances in long-horizon video understanding, where systems must interpret and recall visual and audio information spanning days or even weeks. Existing methods, including large language models and retrieval-augmented generation, are constrained by limited context windows and lack the ability to perform compositional, multi-hop reasoning over very long video streams. In this work, we address these challenges through EGAgent, an enhanced agentic framework centered on entity scene graphs, which represent people, places, objects, and their relationships over time. Our system equips a planning agent with tools for structured search and reasoning over these graphs, as well as hybrid visual and audio search capabilities, enabling ...
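Since only the abstract is available here, the Python sketch below is purely illustrative of the idea it describes: an entity scene graph over people, places, and objects, with time-stamped relations, that a planning agent can query as a structured search tool. Every name in it (Entity, Relation, EntitySceneGraph, search, the predicate strings) is a hypothetical stand-in, not the paper's implementation.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    entity_id: str
    kind: str   # "person", "place", or "object"
    name: str

@dataclass(frozen=True)
class Relation:
    subject: str     # entity_id of the subject
    predicate: str   # e.g. "holding", "located_in", "talking_to"
    obj: str         # entity_id of the object
    start_s: float   # observation window, in seconds into the stream
    end_s: float

@dataclass
class EntitySceneGraph:
    entities: dict = field(default_factory=dict)   # entity_id -> Entity
    relations: list = field(default_factory=list)  # observed Relations

    def add_entity(self, e):
        self.entities[e.entity_id] = e

    def observe(self, r):
        self.relations.append(r)

    def search(self, subject=None, predicate=None, obj=None,
               t_start=None, t_end=None):
        # Structured search a planning agent could expose as a tool:
        # filter observed relations by participants, predicate, and a
        # time window (a relation matches if its span overlaps the window).
        out = []
        for r in self.relations:
            if subject is not None and r.subject != subject:
                continue
            if predicate is not None and r.predicate != predicate:
                continue
            if obj is not None and r.obj != obj:
                continue
            if t_start is not None and r.end_s < t_start:
                continue
            if t_end is not None and r.start_s > t_end:
                continue
            out.append(r)
        return out

A usage example of the compositional, multi-hop style of query the abstract mentions ("where was Alice while she was holding the mug?"), again with invented data:

g = EntitySceneGraph()
g.add_entity(Entity("p1", "person", "Alice"))
g.add_entity(Entity("o1", "object", "coffee mug"))
g.add_entity(Entity("l1", "place", "kitchen"))
g.observe(Relation("p1", "holding", "o1", 3600.0, 3640.0))
g.observe(Relation("p1", "located_in", "l1", 3550.0, 3700.0))

for hold in g.search(subject="p1", predicate="holding", obj="o1"):
    places = g.search(subject="p1", predicate="located_in",
                      t_start=hold.start_s, t_end=hold.end_s)
    print([g.entities[p.obj].name for p in places])   # -> ['kitchen']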