[2503.03206] An Analytical Theory of Spectral Bias in the Learning Dynamics of Diffusion Models
Computer Science > Machine Learning
arXiv:2503.03206 (cs)
[Submitted on 5 Mar 2025 (v1), last revised 5 Apr 2026 (this version, v3)]

Title: An Analytical Theory of Spectral Bias in the Learning Dynamics of Diffusion Models
Authors: Binxu Wang, Cengiz Pehlevan

Abstract: We develop an analytical framework for understanding how the generated distribution evolves during diffusion model training. Leveraging a Gaussian-equivalence principle, we solve the full-batch gradient-flow dynamics of linear and convolutional denoisers and integrate the resulting probability-flow ODE, yielding analytic expressions for the generated distribution. The theory exposes a universal inverse-variance spectral law: the time for an eigen- or Fourier mode to match its target variance scales as $\tau\propto\lambda^{-1}$, so high-variance (coarse) structure is mastered orders of magnitude sooner than low-variance (fine) detail. Extending the analysis to deep linear networks and circulant full-width convolutions shows that weight sharing merely multiplies learning rates -- accelerating but not eliminating the bias -- whereas local convolution introduces a qualitatively different bias. Experiments on Gaussian and natural-image datasets confirm the spectral law persists in deep MLP-based UNet. Convolutional U-Nets, however, display r...
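The inverse-variance law can be illustrated with a minimal numerical sketch (not the paper's exact setup): a diagonal linear denoiser trained by full-batch gradient descent on Gaussian data with per-mode variances $\lambda_i$. Each mode decouples, converges exponentially at a rate proportional to $\lambda_i + \sigma^2$, and the step count to reach a fixed fraction of its target therefore scales roughly as $1/\lambda_i$. All variances, noise levels, and the 90% threshold below are illustrative choices.

```python
import numpy as np

# Toy setup: data x ~ N(0, diag(lam)), corrupted as x_noisy = x + sigma * eps.
# For a per-mode linear denoiser w_i, the population loss of mode i is
#   L_i(w) = (lam_i + sigma^2) w^2 - 2 lam_i w + lam_i,
# minimized at w*_i = lam_i / (lam_i + sigma^2).
lam = np.array([4.0, 1.0, 0.25])   # mode variances (eigenvalues), illustrative
sigma2 = 0.01                      # noise variance, illustrative
lr = 0.01                          # learning rate

w = np.zeros_like(lam)
w_star = lam / (lam + sigma2)
steps_to_90 = np.full(lam.shape, -1, dtype=int)  # first step reaching 90% of w*

for k in range(1, 5000):
    grad = 2.0 * ((lam + sigma2) * w - lam)      # exact population gradient
    w -= lr * grad
    newly = (w >= 0.9 * w_star) & (steps_to_90 < 0)
    steps_to_90[newly] = k

print("variances:       ", lam)
print("steps to 90%:    ", steps_to_90)
print("steps * variance:", steps_to_90 * lam)    # roughly constant: tau ∝ 1/lam
```

Because the per-mode contraction factor is $1 - 2\eta(\lambda_i + \sigma^2)$, halving a mode's variance roughly doubles its convergence time, matching the $\tau \propto \lambda^{-1}$ scaling stated in the abstract (for $\lambda \gg \sigma^2$).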