[2505.21545] Corruption-Aware Training of Latent Video Diffusion Models for Robust Text-to-Video Generation
Computer Science > Computer Vision and Pattern Recognition
arXiv:2505.21545 (cs)
[Submitted on 24 May 2025 (v1), last revised 26 Mar 2026 (this version, v3)]

Title: Corruption-Aware Training of Latent Video Diffusion Models for Robust Text-to-Video Generation
Authors: Chika Maduabuchi, Hao Chen, Yujin Han, Jindong Wang

Abstract: Latent Video Diffusion Models (LVDMs) have achieved state-of-the-art generative quality for image and video generation; however, they remain brittle under noisy conditioning, where small perturbations in text or multimodal embeddings can cascade over timesteps and cause semantic drift. Existing corruption strategies from image diffusion (Gaussian, Uniform) fail in video settings because static noise disrupts temporal fidelity. In this paper, we propose CAT-LVDM, a corruption-aware training framework with structured, data-aligned noise injection tailored for video diffusion. Our two operators, Batch-Centered Noise Injection (BCNI) and Spectrum-Aware Contextual Noise (SACN), align perturbations with batch semantics or spectral dynamics to preserve coherence. CAT-LVDM yields substantial gains: BCNI reduces FVD by 31.9 percent on WebVid-2M, MSR-VTT, and MSVD, while SACN improves UCF-101 by 12.3 percent, outperforming Gaussian, Uniform, and even large diffusion ba...
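The abstract describes BCNI as aligning injected noise with "batch semantics", i.e. perturbing conditioning embeddings along data-aligned directions rather than with isotropic noise. The exact formulation is not given in this abstract, so the following is only a hypothetical sketch of one way such batch-centered, data-aligned noise injection could look: noise is scaled along each sample's deviation from the batch mean embedding. The function name `bcni` and the parameter `sigma` are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def bcni(embeddings: np.ndarray, sigma: float = 0.1, rng=None) -> np.ndarray:
    """Hypothetical sketch of Batch-Centered Noise Injection.

    Scales Gaussian noise elementwise by each sample's deviation from the
    batch mean, so perturbations stay aligned with the batch's semantic
    spread instead of being isotropic. Not the paper's exact operator.
    """
    rng = rng or np.random.default_rng(0)
    center = embeddings.mean(axis=0, keepdims=True)   # batch centroid
    deviation = embeddings - center                   # data-aligned directions
    noise = rng.standard_normal(embeddings.shape)     # base Gaussian noise
    # Noise magnitude follows the batch-centered deviation per dimension.
    return embeddings + sigma * deviation * noise
```

One property of this sketch: if every embedding in the batch is identical, the deviations vanish and no noise is injected, so a degenerate batch is left untouched.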