[2602.13906] Quantifying Normality: Convergence Rate to Gaussian Limit for Stochastic Approximation and Unadjusted OU Algorithm
Summary
This paper analyzes the convergence rate to a Gaussian limit for stochastic approximation methods, providing explicit non-asymptotic bounds on the accuracy of Gaussian approximations in finite time.
Why It Matters
Asymptotic normality results describe the limiting distribution of stochastic approximation iterates but say nothing about how accurate the Gaussian approximation is after finitely many steps. Explicit non-asymptotic rates close this gap, giving practitioners quantitative error guarantees at any iteration and strengthening the theoretical framework for stochastic processes used in machine learning and statistics.
Key Takeaways
- Establishes explicit non-asymptotic bounds on the Wasserstein distance for stochastic approximation iterates.
- Provides tail bounds on the error of stochastic approximation iterates at any time.
- Connects the convergence rate of discrete Ornstein-Uhlenbeck processes to stochastic approximation methods.
- Adapts Stein's method for Gaussian approximation to a broader context.
- Enhances understanding of error dynamics in stochastic processes.
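The setting behind these takeaways can be sketched in a few lines. Below is an illustrative toy, not the paper's construction: a scalar linear stochastic approximation recursion with a polynomially decaying step size (the operator, noise model, and all constants are assumptions for illustration), whose rescaled error is approximately Gaussian for large k by classical asymptotic-normality results.

```python
import numpy as np

# Illustrative toy (assumed setup, not taken from the paper): scalar linear
# stochastic approximation x_{k+1} = x_k + alpha_k * (-(x_k - x_star) + w_k)
# with standard normal noise w_k. In this setting the rescaled error
# (x_k - x_star) / sqrt(alpha_k) is approximately N(0, 1/2) for large k.

rng = np.random.default_rng(0)
x_star = 1.0                                # root of the noiseless operator
num_steps, num_runs = 20_000, 2_000
alpha = lambda k: (k + 10.0) ** -0.75       # assumed polynomially decaying step size

x = np.zeros(num_runs)                      # num_runs independent chains, vectorized
for k in range(num_steps):
    noise = rng.standard_normal(num_runs)
    x += alpha(k) * (-(x - x_star) + noise)

# Rescale the final errors; their empirical mean and standard deviation
# should be close to 0 and sqrt(1/2) respectively.
rescaled = (x - x_star) / np.sqrt(alpha(num_steps - 1))
print(round(float(rescaled.mean()), 2), round(float(rescaled.std()), 2))
```

The non-asymptotic bounds in the paper quantify, in Wasserstein distance, how far the law of such a rescaled iterate is from its Gaussian limit at any finite time k.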
Statistics > Machine Learning
arXiv:2602.13906 (stat)
[Submitted on 14 Feb 2026]
Authors: Shaan Ul Haque, Zedong Wang, Zixuan Zhang, Siva Theja Maguluri
Abstract: Stochastic approximation (SA) is a method for finding the root of an operator perturbed by noise. There is a rich literature establishing the asymptotic normality of rescaled SA iterates under fairly mild conditions. However, these asymptotic results do not quantify the accuracy of the Gaussian approximation in finite time. In this paper, we establish explicit non-asymptotic bounds on the Wasserstein distance between the distribution of the rescaled iterate at time k and the asymptotic Gaussian limit for various choices of step-sizes including constant and polynomially decaying. As an immediate consequence, we obtain tail bounds on the error of SA iterates at any time. We obtain the sharp rates by first studying the convergence rate of the discrete Ornstein-Uhlenbeck (O-U) process driven by general noise, whose stationary distribution is identical to the limiting Gaussian distribution of the rescaled SA iterates. We believe that this is of independent interest, given its connection to s...
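The discrete O-U process mentioned in the abstract can be simulated directly. As a hedged sketch (the constant step size, standard normal driving noise, and the sorted-samples Wasserstein estimator are assumptions for illustration, not the paper's general-noise setting), one can check empirically that the chain's law approaches a Gaussian stationary distribution:

```python
import numpy as np

# Sketch under assumptions (constant step size alpha, Gaussian driving noise):
# discrete Ornstein-Uhlenbeck recursion y_{k+1} = (1 - alpha)*y_k + sqrt(alpha)*w_k.
# The variance recursion v_{k+1} = (1 - alpha)^2 * v_k + alpha has fixed point
# v = 1 / (2 - alpha), so the stationary law here is N(0, 1/(2 - alpha)).

rng = np.random.default_rng(1)
alpha, num_chains, num_steps = 0.05, 50_000, 500

y = np.zeros(num_chains)                    # many independent chains, vectorized
for _ in range(num_steps):
    y = (1 - alpha) * y + np.sqrt(alpha) * rng.standard_normal(num_chains)

stationary_std = (2.0 - alpha) ** -0.5

# Empirical one-dimensional Wasserstein-1 distance between the chain's samples
# and a fresh sample from the Gaussian limit, via the sorted-samples formula.
z = stationary_std * rng.standard_normal(num_chains)
w1 = float(np.mean(np.abs(np.sort(y) - np.sort(z))))
print(round(w1, 3))
```

After enough steps the estimated distance is small and shrinks with the sample size, which is the qualitative behavior the paper's bounds make precise for general noise and step-size schedules.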