[2510.08409] Optimal Stopping in Latent Diffusion Models
Statistics > Machine Learning

arXiv:2510.08409 (stat)

[Submitted on 9 Oct 2025 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: Optimal Stopping in Latent Diffusion Models

Authors: Yu-Han Wu, Quentin Berthet, Gérard Biau, Claire Boyer, Romuald Elie, Pierre Marion

Abstract: We identify and analyze a surprising phenomenon in Latent Diffusion Models (LDMs): the final steps of the diffusion can degrade sample quality. In contrast to conventional arguments that justify early stopping for numerical stability, this phenomenon is intrinsic to the dimensionality reduction in LDMs. We provide a principled explanation by analyzing the interaction between the latent dimension and the stopping time. Under a Gaussian framework with linear autoencoders, we characterize the conditions under which early stopping is needed to minimize the distance between the generated and target distributions. More precisely, we show that lower-dimensional representations benefit from earlier termination, whereas higher-dimensional latent spaces require a later stopping time. We further establish that the latent dimension interacts with other hyperparameters of the problem, such as constraints on the parameters of score matching. Experiments on synthetic and real datasets illustrate these properties, underlining that early stopping can improve generative quality. Together, o...
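As a rough illustration of the setting the abstract describes (not the authors' code or construction), the following self-contained Python sketch sets up a toy version of the Gaussian framework: a zero-mean anisotropic Gaussian target, a linear autoencoder that keeps the top-k coordinates, and a reverse Ornstein-Uhlenbeck diffusion in the latent space that can be stopped at a time t_stop > 0. The perturbed score (the eps term) is purely an assumption of this sketch, standing in loosely for a constrained score estimate; every name here (sample_reverse, t_stop, w2sq_diag, the chosen spectrum) is illustrative.

```python
# Toy probe (not from the paper): how do stopping time and latent
# dimension interact for a linear-autoencoder LDM on Gaussian data?
import numpy as np

rng = np.random.default_rng(0)

# Target: zero-mean Gaussian in R^d with anisotropic diagonal covariance.
d = 6
lam = np.array([2.5, 2.0, 1.5, 0.5, 0.2, 0.1])  # covariance spectrum (assumed)

def decode(z, k):
    """Linear decoder of a rank-k linear autoencoder: zero-pad back to R^d."""
    out = np.zeros((z.shape[0], d))
    out[:, :k] = z
    return out

def sample_reverse(k, t_stop, n=20_000, T=4.0, n_steps=400):
    """Euler-Maruyama for the reverse VP/OU SDE in the k-dim latent,
    run from time T down to t_stop (t_stop = 0 is the full reverse flow).

    Forward SDE: dX = -X dt + sqrt(2) dW, so the time-t latent marginal is
    N(0, v(t)) per axis with v(t) = lam * exp(-2t) + 1 - exp(-2t).
    We deliberately use the imperfect score -x / (v(t) + eps) as a crude
    stand-in for a constrained score estimate (eps is an assumption).
    """
    eps = 0.3
    lam_k = lam[:k]
    ts = np.linspace(T, t_stop, n_steps + 1)
    y = rng.normal(size=(n, k))  # approx N(0, v(T)), since v(T) ~ 1 at T = 4
    for i in range(n_steps):
        t, dt = ts[i], ts[i] - ts[i + 1]
        v = lam_k * np.exp(-2 * t) + 1 - np.exp(-2 * t)
        score = -y / (v + eps)
        y = y + dt * (y + 2 * score) + np.sqrt(2 * dt) * rng.normal(size=y.shape)
    return decode(y, k)

def w2sq_diag(samples):
    """Squared W2 to the target, both treated as zero-mean diagonal Gaussians."""
    s = samples.var(axis=0)
    return float(np.sum((np.sqrt(s) - np.sqrt(lam)) ** 2))

for k in (2, 6):
    dists = {t: w2sq_diag(sample_reverse(k, t)) for t in (0.0, 0.05, 0.13, 0.3)}
    best = min(dists, key=dists.get)
    print(f"latent dim {k}: "
          + ", ".join(f"t_stop={t:.2f} -> W2^2={v:.3f}" for t, v in dists.items())
          + f"  (best: t_stop={best:.2f})")
```

In this toy, the mass along the discarded axes contributes a fixed term to the squared Wasserstein distance, so the stopping time only trades off the variance mismatch along the kept axes; whether the best t_stop is interior (early stopping helps) or zero depends on the spectrum and on the score perturbation, which is the kind of interplay the paper analyzes rigorously.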