[2602.13960] Steady-State Behavior of Constant-Stepsize Stochastic Approximation: Gaussian Approximation and Tail Bounds
Summary
This paper explores the steady-state behavior of constant-stepsize stochastic approximation, providing explicit non-asymptotic error bounds for Gaussian approximations and tail probabilities.
Why It Matters
Understanding the steady-state behavior of stochastic approximation methods is crucial for designing efficient machine learning algorithms. This research provides concrete, non-asymptotic bounds that quantify how closely the steady state is approximated by its Gaussian limit at a fixed stepsize, with direct applications to stochastic gradient descent and related methods, making the results relevant to both theory and practice.
Key Takeaways
- Establishes explicit error bounds for Gaussian approximations in constant-stepsize stochastic approximation.
- Covers both i.i.d. and Markovian noise models, enhancing applicability.
- Provides dimension- and stepsize-dependent bounds in Wasserstein distance.
- Derives non-uniform Berry-Esseen-type tail bounds for steady-state probabilities.
- Identifies a non-Gaussian limiting law under specific scaling conditions.
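The Gaussian approximation in the takeaways above can be seen in a toy simulation. The sketch below is illustrative, not from the paper: it runs constant-stepsize SGD on the 1-D quadratic f(x) = x^2/2 with additive Gaussian gradient noise. In this linear special case the stationary law is exactly Gaussian with variance alpha*sigma^2/(2 - alpha), so the scaled steady state x/sqrt(alpha) has variance sigma^2/(2 - alpha).

```python
import numpy as np

# Illustrative sketch (not the paper's setting): constant-stepsize SGD on the
# 1-D quadratic f(x) = x^2/2 with i.i.d. Gaussian gradient noise. The update
# x_{k+1} = x_k - alpha * (x_k + noise) is linear, so its stationary variance
# solves v = (1 - alpha)^2 v + alpha^2 sigma^2, i.e. v = alpha*sigma^2/(2 - alpha).
rng = np.random.default_rng(0)
alpha, sigma = 0.1, 1.0
n_steps, burn_in = 200_000, 10_000

x = 0.0
samples = []
for k in range(n_steps):
    noise = sigma * rng.standard_normal()
    x = x - alpha * (x + noise)   # noisy gradient step, grad f(x) = x
    if k >= burn_in:
        samples.append(x)

# Centered-and-scaled steady state: x / sqrt(alpha) is approximately Gaussian
scaled = np.array(samples) / np.sqrt(alpha)
print(np.var(scaled))   # close to sigma**2 / (2 - alpha) = 0.526...
```

As alpha goes to 0 the scaled variance tends to sigma^2/2, matching the weak-convergence picture; for fixed alpha the gap alpha*sigma^2/(2*(2 - alpha)) is exactly the kind of fixed-stepsize discrepancy the paper's explicit bounds control.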
arXiv:2602.13960 (cs) [Submitted on 15 Feb 2026]
Subjects: Computer Science > Machine Learning
Authors: Zedong Wang, Yuyang Wang, Ijay Narang, Felix Wang, Yuzhou Wang, Siva Theja Maguluri
Abstract
Constant-stepsize stochastic approximation (SA) is widely used in learning for computational efficiency. For a fixed stepsize, the iterates typically admit a stationary distribution that is rarely tractable. Prior work shows that as the stepsize $\alpha \downarrow 0$, the centered-and-scaled steady state converges weakly to a Gaussian random vector. However, for fixed $\alpha$, this weak convergence offers no usable error bound for approximating the steady state by its Gaussian limit. This paper provides explicit, non-asymptotic error bounds for fixed $\alpha$. We first prove general-purpose theorems that bound the Wasserstein distance between the centered-scaled steady state and an appropriate Gaussian distribution, under regularity conditions for drift and moment conditions for noise. To ensure broad applicability, we cover both i.i.d. and Markovian noise models. We then instantiate these theorems for three representative SA settings: (1) stochastic gradient descent (SGD) for smooth strongly conv...
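To make the Wasserstein-distance statement in the abstract concrete, here is a hypothetical numerical sketch (not the paper's construction): it estimates the Wasserstein-1 distance between the centered-and-scaled steady state of the same 1-D SGD recursion and its Gaussian limit N(0, sigma^2/2), for two stepsizes. In one dimension, W1 between two equal-size empirical samples is simply the mean absolute difference of their order statistics.

```python
import numpy as np

# Hypothetical sketch (not from the paper): the W1 distance between the scaled
# steady state and its small-stepsize Gaussian limit shrinks as alpha shrinks.
rng = np.random.default_rng(1)
sigma = 1.0

def scaled_steady_state(alpha, n=200_000, burn_in=20_000):
    """Simulate x_{k+1} = x_k - alpha*(x_k + noise); return x / sqrt(alpha)."""
    x, out = 0.0, []
    for k in range(n):
        x -= alpha * (x + sigma * rng.standard_normal())
        if k >= burn_in:
            out.append(x)
    return np.array(out) / np.sqrt(alpha)

def w1(u, v):
    """Wasserstein-1 distance between two equal-size empirical samples."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

# Gaussian limit N(0, sigma^2 / 2), sample size matched to the chains (180k)
limit = rng.standard_normal(180_000) * sigma / np.sqrt(2)
dist = {a: w1(scaled_steady_state(a), limit) for a in (0.5, 0.05)}
print(dist)   # the distance is markedly smaller at the smaller stepsize
```

The paper's contribution, by contrast, is to bound this distance analytically for a fixed $\alpha$, with explicit dependence on the dimension and the stepsize, rather than estimating it by simulation.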