[2602.14381] Adapting VACE for Real-Time Autoregressive Video Diffusion
Summary
This article presents an adaptation of VACE for real-time autoregressive video generation, bringing VACE's unified video control to streaming pipelines while quantifying the latency and fidelity trade-offs involved.
Why It Matters
Adapting VACE for real-time use is significant because it enables controllable video generation in streaming contexts, which is increasingly relevant in AI-driven media production. Notably, the adaptation reuses existing pretrained VACE weights without additional training, making it a practical contribution to the field of computer vision and AI.
Key Takeaways
- VACE adaptation allows for real-time autoregressive video generation.
- The model preserves fixed chunk sizes and KV caching essential for streaming.
- Structural control and inpainting add 20-30% latency overhead, with negligible VRAM cost relative to the base model.
- Reference-to-video fidelity is severely degraded relative to batch VACE due to causal attention constraints.
- A reference implementation is available for practical application.
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.14381 (cs) [Submitted on 16 Feb 2026]
Title: Adapting VACE for Real-Time Autoregressive Video Diffusion
Authors: Ryan Fosdick (Daydream)
Abstract: We describe an adaptation of VACE (Video All-in-one Creation and Editing) for real-time autoregressive video generation. VACE provides unified video control (reference guidance, structural conditioning, inpainting, and temporal extension) but assumes bidirectional attention over full sequences, making it incompatible with streaming pipelines that require fixed chunk sizes and causal attention. The key modification moves reference frames from the diffusion latent space into a parallel conditioning pathway, preserving the fixed chunk sizes and KV caching that autoregressive models require. This adaptation reuses existing pretrained VACE weights without additional training. Across 1.3B and 14B model scales, VACE adds 20-30% latency overhead for structural control and inpainting, with negligible VRAM cost relative to the base model. Reference-to-video fidelity is severely degraded compared to batch VACE due to causal attention constraints. A reference implementation is available at this https URL.
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.14381
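The core idea in the abstract, moving reference frames out of the diffusion latent sequence and into a parallel conditioning pathway that every chunk can attend to, can be illustrated with a toy sketch. This is not the paper's implementation: the dimensions, the identity key/value "projections", and all names here are hypothetical, chosen only to show why the chunk size and KV cache stay fixed no matter how many reference frames are supplied.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8      # toy model dimension (hypothetical)
CHUNK = 4  # fixed chunk size the autoregressive model requires

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(q, k, v):
    # standard scaled dot-product attention
    return softmax(q @ k.T / np.sqrt(D)) @ v

# Parallel conditioning pathway: reference frames are encoded once into a
# K/V bank that every chunk can read, but they never enter the diffusion
# latent sequence itself, so the latent chunk size never grows.
ref_frames = rng.normal(size=(6, D))    # hypothetical reference tokens
ref_k, ref_v = ref_frames, ref_frames   # identity "projections" for the sketch

kv_cache = []  # per-chunk cached K/V, as in ordinary autoregressive decoding
outputs = []
for step in range(3):
    chunk = rng.normal(size=(CHUNK, D))  # current latent chunk (queries)
    # causal attention: this chunk sees the reference bank, all previously
    # cached chunks, and itself -- never future chunks
    k = np.concatenate([ref_k] + [k for k, _ in kv_cache] + [chunk])
    v = np.concatenate([ref_v] + [v for _, v in kv_cache] + [chunk])
    outputs.append(attend(chunk, k, v))
    kv_cache.append((chunk, chunk))      # cache this chunk for later steps

# Each output chunk keeps the fixed size regardless of reference count.
print([o.shape for o in outputs])
```

In a batch (bidirectional) setting the reference frames could simply be prepended to the full latent sequence; the sketch shows the streaming-compatible alternative, where they act purely as extra keys and values. Because causal chunks can only read the references through attention rather than jointly denoise with them, it is also plausible to see why reference fidelity suffers relative to batch VACE.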