[2211.01512] Convergence of the Inexact Langevin Algorithm in KL Divergence with Application to Score-based Generative Models
Computer Science > Machine Learning

arXiv:2211.01512 (cs)

[Submitted on 2 Nov 2022 (v1), last revised 28 Mar 2026 (this version, v3)]

Title: Convergence of the Inexact Langevin Algorithm in KL Divergence with Application to Score-based Generative Models

Authors: Kaylee Yingxi Yang, Andre Wibisono

Abstract: Motivated by the increasingly popular Score-based Generative Modeling (SGM), we study the Inexact Langevin Dynamics (ILD) and the Inexact Langevin Algorithm (ILA), in which a score function estimate is used in place of the exact score. We establish {\em stable} biased convergence guarantees in terms of the Kullback-Leibler (KL) divergence. To achieve these guarantees, we impose two key assumptions: 1) the target distribution satisfies the log-Sobolev inequality (LSI), and 2) the error of the score estimator exhibits a sub-Gaussian tail, which we refer to as the Moment Generating Function (MGF) error assumption. Under the stronger $L^\infty$ score error assumption, we obtain a stable convergence bound in Rényi divergence. We also generalize the proof technique to SGM and derive a stable convergence bound in KL divergence. In addition, we explore the question of how to obtain a provably accurate score estimator. We demonstrate that a simple estimator based on kernel density estimation fulfills ...
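To make the abstract's two assumptions concrete, here is a hedged restatement in standard notation; the symbols $\alpha$, $s$, $\lambda$, and $M$ are our own illustrative notation, and the paper's exact constants and formulations may differ:

```latex
% Log-Sobolev inequality (LSI): the target \pi satisfies LSI with constant
% \alpha > 0 if the KL divergence is controlled by the relative Fisher
% information, for every probability density \rho:
\mathrm{KL}(\rho \,\|\, \pi) \;\le\; \frac{1}{2\alpha}\,
  \mathbb{E}_{\rho}\!\left[\left\| \nabla \log \frac{\rho}{\pi} \right\|^{2}\right].

% MGF (sub-Gaussian) score error: the estimate s of \nabla \log \pi satisfies,
% for some \lambda > 0 and some finite M,
\mathbb{E}\!\left[\exp\!\left(\lambda \left\| s(X) - \nabla \log \pi(X) \right\|^{2}\right)\right] \;\le\; M.
```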
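The ILA update itself is the unadjusted Langevin step run with the estimated score, $x_{k+1} = x_k + \eta\, s(x_k) + \sqrt{2\eta}\, \xi_k$ with $\xi_k \sim \mathcal{N}(0, I)$. A minimal NumPy sketch, assuming a user-supplied estimator `score_est` (a hypothetical name) and illustrative values for the step size and iteration count:

```python
import numpy as np

def inexact_langevin_algorithm(score_est, x0, step_size=1e-2, n_iters=5000, rng=None):
    """ILA: x_{k+1} = x_k + eta * s(x_k) + sqrt(2 * eta) * xi_k, xi_k ~ N(0, I),
    where s = score_est is an estimate of grad log pi for the target pi."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x + step_size * score_est(x) + np.sqrt(2.0 * step_size) * noise
    return x
```

Because `score_est` is only an estimate, the iterates converge to a biased neighborhood of the target rather than to the target itself; the abstract's point is that under LSI and the MGF error assumption this bias remains stable in KL divergence.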
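For the score-estimation question, one natural instantiation (a sketch under our own assumptions, not necessarily the exact estimator analyzed in the paper) is the score of a Gaussian kernel density estimate: for $\hat p(x) = \frac{1}{n}\sum_i \mathcal{N}(x; x_i, h^2 I)$, a direct computation gives $\nabla \log \hat p(x) = \frac{1}{h^2}\sum_i w_i(x)\,(x_i - x)$ with softmax weights $w_i(x) \propto \exp(-\|x - x_i\|^2 / 2h^2)$:

```python
import numpy as np

def kde_score(data, h):
    """Score of a Gaussian KDE: s(x) = grad log p_hat(x), where
    p_hat(x) = (1/n) * sum_i N(x; x_i, h^2 I) and h > 0 is the bandwidth."""
    data = np.asarray(data, dtype=float)      # shape (n, d)

    def score(x):
        diffs = data - x                      # x_i - x, shape (n, d)
        log_w = -np.sum(diffs**2, axis=1) / (2.0 * h**2)
        w = np.exp(log_w - log_w.max())       # numerically stable softmax
        w /= w.sum()
        return (w[:, None] * diffs).sum(axis=0) / h**2

    return score
```

Plugging this estimator into the ILA sketch above (with illustrative sample size and bandwidth) gives an end-to-end example:

```python
rng = np.random.default_rng(0)
data = rng.standard_normal((500, 2))          # samples from the target
score_est = kde_score(data, h=0.5)
x = inexact_langevin_algorithm(score_est, x0=np.zeros(2), rng=rng)
```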