[2603.29252] Scaling the Long Video Understanding of Multimodal Large Language Models via Visual Memory Mechanism
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.29252 (cs)
[Submitted on 31 Mar 2026]

Title: Scaling the Long Video Understanding of Multimodal Large Language Models via Visual Memory Mechanism
Authors: Tao Chen, Kun Zhang, Qiong Wu, Xiao Chen, Chao Chang, Xiaoshuai Sun, Yiyi Zhou, Rongrong Ji

Abstract: Long video understanding is a key challenge that hinders the advancement of Multimodal Large Language Models (MLLMs). In this paper, we study this problem from the perspective of a visual memory mechanism and propose a novel, training-free approach termed Flexible Memory (FlexMem). In principle, FlexMem mimics the human behavior of video watching, i.e., continually watching video content and recalling the most relevant memory fragments to answer the question. In this way, FlexMem enables MLLMs to understand videos of unbounded length, unlike previous methods that process all video information at once and are bounded by an input upper limit. Concretely, FlexMem first treats the visual KV caches as the memory source and realizes effective memory transfer and writing via a dual-pathway compression design. Afterwards, FlexMem also explores different memory reading strategies for the diverse video understanding tasks, including the...
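Based only on the high-level description in the abstract, the sketch below illustrates one way a visual memory bank over KV caches could work: each video chunk's keys and values are compressed and written as a memory fragment, and at answer time the fragments most relevant to the question are recalled. The class name `VisualMemoryBank`, the average-pooling "compression", and the cosine-similarity "reading" are all hypothetical stand-ins; the paper's actual dual-pathway compression and reading strategies are not specified here.

import torch

class VisualMemoryBank:
    """Hypothetical memory bank over visual KV caches (not the paper's design)."""

    def __init__(self, compress_ratio: int = 4):
        self.compress_ratio = compress_ratio
        self.keys: list[torch.Tensor] = []    # compressed key fragments
        self.values: list[torch.Tensor] = []  # compressed value fragments

    def write(self, k: torch.Tensor, v: torch.Tensor) -> None:
        """Compress one chunk's visual KV cache and append it as a fragment.

        k, v: (num_tokens, head_dim) tensors from one processed video chunk.
        Here "compression" is plain average pooling over token groups, a
        stand-in for the paper's dual-pathway compression.
        """
        n = (k.shape[0] // self.compress_ratio) * self.compress_ratio
        k_c = k[:n].reshape(-1, self.compress_ratio, k.shape[-1]).mean(dim=1)
        v_c = v[:n].reshape(-1, self.compress_ratio, v.shape[-1]).mean(dim=1)
        self.keys.append(k_c)
        self.values.append(v_c)

    def read(self, query: torch.Tensor, top_k: int = 2):
        """Recall the fragments most relevant to a question embedding.

        Relevance is scored as the max cosine similarity between the query
        and a fragment's compressed keys; this is only one plausible
        reading strategy among those the abstract alludes to.
        """
        q = torch.nn.functional.normalize(query, dim=-1)
        scores = []
        for k_c in self.keys:
            k_n = torch.nn.functional.normalize(k_c, dim=-1)
            scores.append((k_n @ q).max().item())
        order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        return [(self.keys[i], self.values[i]) for i in order[:top_k]]


# Usage: stream a long video chunk-by-chunk (so the total length is unbounded),
# writing each chunk's KV cache, then recall relevant fragments at answer time.
bank = VisualMemoryBank(compress_ratio=4)
for _ in range(10):                # ten video chunks with random stand-in data
    k = torch.randn(64, 128)       # a chunk's visual key cache
    v = torch.randn(64, 128)       # a chunk's visual value cache
    bank.write(k, v)

question_embedding = torch.randn(128)
fragments = bank.read(question_embedding, top_k=2)
print(len(fragments), fragments[0][0].shape)  # 2 torch.Size([16, 128])

Because only compressed fragments are retained and reading is selective, memory cost grows with the number of stored fragments rather than with raw video length, which is the property that lets this kind of mechanism scale past a fixed input limit.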