[2506.07177] Frame Guidance: Training-Free Guidance for Frame-Level Control in Video Diffusion Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2506.07177 (cs)

[Submitted on 8 Jun 2025 (v1), last revised 3 Mar 2026 (this version, v2)]

Title: Frame Guidance: Training-Free Guidance for Frame-Level Control in Video Diffusion Models

Authors: Sangwon Jang, Taekyung Ki, Jaehyeong Jo, Jaehong Yoon, Soo Ye Kim, Zhe Lin, Sung Ju Hwang

Abstract: Advancements in diffusion models have significantly improved video quality, directing attention to fine-grained controllability. However, many existing methods depend on fine-tuning large-scale video models for specific tasks, which becomes increasingly impractical as model sizes continue to grow. In this work, we present Frame Guidance, a training-free guidance method for controllable video generation based on frame-level signals such as keyframes, style reference images, sketches, or depth maps. For practical training-free guidance, we propose a simple latent processing method that dramatically reduces memory usage, and apply a novel latent optimization strategy designed for globally coherent video generation. Frame Guidance enables effective control across diverse tasks, including keyframe guidance, stylization, and looping, without any training, and is compatible with any video model. Experimental results show that Frame Guidance can produce high-...
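The core idea described in the abstract — steering generation by optimizing the latent against a loss defined on frame-level signals, rather than fine-tuning the model — can be illustrated with a minimal toy sketch. This is not the paper's implementation: the decoder is stood in by a fixed linear map, the loss is a plain squared error against a reference keyframe, and all names (`decode`, `frame_loss_grad`, the shapes) are hypothetical; it only shows the general pattern of nudging a video latent toward a frame-level target via loss gradients.

```python
import numpy as np

# Toy sketch of gradient-based, training-free frame-level guidance.
# Hypothetical stand-ins, not the paper's method: a video latent z of
# shape (num_frames, latent_dim) and a fixed linear "decoder" W.

rng = np.random.default_rng(0)
num_frames, latent_dim, pixel_dim = 4, 8, 8
W = rng.standard_normal((latent_dim, pixel_dim)) * 0.1  # stand-in decoder

def decode(z_frame):
    # Stand-in for a VAE decoder: linear map from latent to "pixels".
    return z_frame @ W

def frame_loss_grad(z, frame_idx, target):
    # Analytic gradient of 0.5 * ||decode(z[frame_idx]) - target||^2
    # with respect to the full latent z (zero for unguided frames).
    grad = np.zeros_like(z)
    residual = decode(z[frame_idx]) - target
    grad[frame_idx] = residual @ W.T
    return grad, 0.5 * float(residual @ residual)

z = rng.standard_normal((num_frames, latent_dim))      # initial latent
target = decode(rng.standard_normal(latent_dim))       # reference keyframe
step_size = 0.5

losses = []
for _ in range(50):
    grad, loss = frame_loss_grad(z, frame_idx=0, target=target)
    losses.append(loss)
    z -= step_size * grad  # guidance step: pull frame 0 toward the target

print(f"guidance loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In an actual video diffusion pipeline, such a gradient step would be interleaved with denoising steps and computed through the (nonlinear) decoder by automatic differentiation; the paper's contributions concern making exactly that computation memory-efficient and globally coherent across frames.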