[2603.02447] Spectral Regularization for Diffusion Models
Computer Science > Machine Learning
arXiv:2603.02447 (cs) [Submitted on 2 Mar 2026]

Title: Spectral Regularization for Diffusion Models
Authors: Satish Chandran, Nicolas Roque dos Santos, Yunshu Wu, Greg Ver Steeg, Evangelos Papalexakis

Abstract: Diffusion models are typically trained using pointwise reconstruction objectives that are agnostic to the spectral and multi-scale structure of natural signals. We propose a loss-level spectral regularization framework that augments standard diffusion training with differentiable Fourier- and wavelet-domain losses, without modifying the diffusion process, model architecture, or sampling procedure. The proposed regularizers act as soft inductive biases that encourage appropriate frequency balance and coherent multi-scale structure in generated samples. Our approach is compatible with DDPM, DDIM, and EDM formulations and introduces negligible computational overhead. Experiments on image and audio generation demonstrate consistent improvements in sample quality, with the largest gains observed on higher-resolution, unconditional datasets where fine-scale structure is most challenging to model.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.02447 [cs.LG] (or arXiv:2603.02447v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.02447
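The abstract describes augmenting the standard pointwise diffusion objective with a differentiable Fourier-domain penalty, added at the loss level only. A minimal NumPy sketch of one plausible form of such a combined objective is below; the function names, the log-magnitude spectral distance, and the weight `lam` are illustrative assumptions, not the paper's exact formulation (which also includes wavelet-domain terms):

```python
import numpy as np

def pointwise_loss(pred, target):
    # Standard pixel-space MSE, as in ordinary diffusion training.
    return np.mean((pred - target) ** 2)

def fourier_loss(pred, target, eps=1e-8):
    # Hypothetical Fourier-domain regularizer: penalize the squared
    # difference between log-magnitude spectra of prediction and target,
    # encouraging the generated sample's frequency balance to match.
    pred_mag = np.abs(np.fft.fft2(pred))
    target_mag = np.abs(np.fft.fft2(target))
    return np.mean((np.log(pred_mag + eps) - np.log(target_mag + eps)) ** 2)

def total_loss(pred, target, lam=0.1):
    # Augmented objective: pointwise term plus a soft spectral penalty.
    # Nothing about the diffusion process or sampler changes; only the
    # training loss is modified, matching the abstract's "loss-level" claim.
    return pointwise_loss(pred, target) + lam * fourier_loss(pred, target)
```

In an actual training loop this would be applied to the denoiser's prediction at each noise level, with automatic differentiation (e.g. in a deep-learning framework) replacing NumPy so the spectral term contributes gradients.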