[2605.06924] A$^2$RD: Agentic Autoregressive Diffusion for Long Video Consistency
Computer Science > Computer Vision and Pattern Recognition
arXiv:2605.06924 (cs)
[Submitted on 7 May 2026]

Title: A$^2$RD: Agentic Autoregressive Diffusion for Long Video Consistency
Authors: Do Xuan Long, Yale Song, Min-Yen Kan, Tomas Pfister, Long T. Le

Abstract: Synthesizing consistent and coherent long videos remains a fundamental challenge. Existing methods suffer from semantic drift and narrative collapse over long horizons. We present A$^2$RD, an Agentic Auto-Regressive Diffusion architecture that decouples creative synthesis from consistency enforcement. A$^2$RD formulates long video synthesis as a closed-loop process that synthesizes and self-improves video segment-by-segment through a Retrieve--Synthesize--Refine--Update cycle. It comprises three core components: (i) Multimodal Video Memory, which tracks video progression across modalities; (ii) Adaptive Segment Generation, which switches among generation modes for natural progression and visual consistency; and (iii) Hierarchical Test-Time Self-Improvement, which self-improves each segment at the frame and video levels to prevent error propagation. We further introduce LVBench-C, a challenging benchmark with non-linear entity and environment transitions to stress-test long-horizon consistency. Across public and LVBench-C benchmarks spanning one- to ten-minute vid...
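The abstract's closed-loop, segment-by-segment process can be illustrated with a minimal sketch of the Retrieve--Synthesize--Refine--Update cycle. All class and function names here are hypothetical stand-ins (the paper's actual interfaces are not given on this page), and strings are used in place of real video segments and diffusion calls.

```python
# Minimal sketch of a Retrieve--Synthesize--Refine--Update loop for
# segment-by-segment long-video generation. Names and data types are
# illustrative assumptions, not the paper's actual implementation.

class VideoMemory:
    """Toy stand-in for the Multimodal Video Memory: stores a summary
    of each finished segment and serves recent context on retrieval."""

    def __init__(self):
        self.entries = []

    def retrieve(self):
        # Return the most recent summaries to condition the next segment on.
        return self.entries[-3:]

    def update(self, summary):
        self.entries.append(summary)


def synthesize(context, idx):
    # Stand-in for the diffusion generator conditioned on retrieved context.
    return f"segment-{idx} | ctx={len(context)}"


def refine(segment):
    # Stand-in for hierarchical test-time self-improvement
    # (frame- and video-level checks before committing the segment).
    return segment + " | refined"


def generate_long_video(num_segments):
    """Run the closed-loop cycle segment by segment."""
    memory = VideoMemory()
    video = []
    for i in range(num_segments):
        context = memory.retrieve()               # Retrieve
        seg = synthesize(context, i)              # Synthesize
        seg = refine(seg)                         # Refine
        memory.update(f"summary of segment-{i}")  # Update
        video.append(seg)
    return video
```

The point of the loop structure is that each segment is conditioned on memory and self-improved before the memory is updated, so errors are corrected before they can propagate to later segments.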