[2602.13960] Steady-State Behavior of Constant-Stepsize Stochastic Approximation: Gaussian Approximation and Tail Bounds

arXiv - Machine Learning

Summary

This paper explores the steady-state behavior of constant-stepsize stochastic approximation, providing explicit non-asymptotic error bounds for Gaussian approximations and tail probabilities.

Why It Matters

Understanding the steady-state behavior of stochastic approximation methods is crucial for assessing the accuracy of machine learning algorithms run with a fixed stepsize. This research provides concrete, non-asymptotic bounds that quantify how close the stationary distribution of the iterates is to its Gaussian limit, with direct relevance to stochastic gradient descent and related methods, for both theoretical analysis and practical tuning.

Key Takeaways

  • Establishes explicit error bounds for Gaussian approximations in constant-stepsize stochastic approximation.
  • Covers both i.i.d. and Markovian noise models, enhancing applicability.
  • Provides dimension- and stepsize-dependent bounds in Wasserstein distance.
  • Derives non-uniform Berry-Esseen-type tail bounds for steady-state probabilities.
  • Identifies a non-Gaussian limiting law under specific scaling conditions.
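As a toy illustration of the steady-state phenomenon the takeaways describe, here is a minimal sketch (the 1D quadratic objective, noise model, and all names are assumptions for illustration, not taken from the paper): constant-stepsize SGD on f(x) = x²/2 with additive Gaussian gradient noise is an AR(1) chain whose stationary variance is exactly ασ²/(2 − α), so the √α-rescaled iterate has variance σ²/(2 − α) → σ²/2 as α ↓ 0, matching the Gaussian-limit picture.

```python
import random

def sgd_chain(alpha, sigma=1.0, n=200_000, burn=10_000, seed=0):
    # Constant-stepsize SGD on f(x) = x^2 / 2 with i.i.d. Gaussian gradient
    # noise: x_{k+1} = x_k - alpha * (x_k + xi_k).  This is an AR(1) chain
    # with stationary mean 0 and stationary variance alpha*sigma^2/(2-alpha).
    rng = random.Random(seed)
    x, samples = 0.0, []
    for k in range(n):
        x -= alpha * (x + rng.gauss(0.0, sigma))
        if k >= burn:  # discard burn-in so samples are near-stationary
            samples.append(x)
    return samples

alpha = 0.05
s = sgd_chain(alpha)
emp_var = sum(v * v for v in s) / len(s)   # stationary mean is 0
scaled_var = emp_var / alpha               # variance of x / sqrt(alpha)
print(f"scaled variance ~ {scaled_var:.3f}; Gaussian-limit prediction 0.5")
```

For this linear toy case the stationary law is itself Gaussian; the paper's contribution is bounding the gap to Gaussianity in settings where the stationary distribution is intractable and, in general, not Gaussian at fixed stepsize.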

arXiv:2602.13960 (cs) · Submitted on 15 Feb 2026

Title: Steady-State Behavior of Constant-Stepsize Stochastic Approximation: Gaussian Approximation and Tail Bounds

Authors: Zedong Wang, Yuyang Wang, Ijay Narang, Felix Wang, Yuzhou Wang, Siva Theja Maguluri

Abstract: Constant-stepsize stochastic approximation (SA) is widely used in learning for computational efficiency. For a fixed stepsize, the iterates typically admit a stationary distribution that is rarely tractable. Prior work shows that as the stepsize $\alpha \downarrow 0$, the centered-and-scaled steady state converges weakly to a Gaussian random vector. However, for fixed $\alpha$, this weak convergence offers no usable error bound for approximating the steady state by its Gaussian limit. This paper provides explicit, non-asymptotic error bounds for fixed $\alpha$. We first prove general-purpose theorems that bound the Wasserstein distance between the centered-scaled steady state and an appropriate Gaussian distribution, under regularity conditions for drift and moment conditions for noise. To ensure broad applicability, we cover both i.i.d. and Markovian noise models. We then instantiate these theorems for three representative SA settings: (1) stochastic gradient descent (SGD) for smooth strongly conv...
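In the notation commonly used for this setting (the symbols below are generic conventions, not copied from the paper), the recursion and the small-stepsize Gaussian limit described in the abstract read:

```latex
x_{k+1} = x_k + \alpha\, F(x_k, \xi_{k+1}), \qquad k \ge 0,
\qquad
\frac{x_\infty^{(\alpha)} - x^\star}{\sqrt{\alpha}} \;\Rightarrow\; \mathcal{N}(0, \Sigma)
\quad \text{as } \alpha \downarrow 0,
```

where $x_\infty^{(\alpha)}$ denotes a draw from the stationary distribution at stepsize $\alpha$ and $x^\star$ the target point. The paper's stated contribution is an explicit, non-asymptotic bound on the Wasserstein distance between the two sides for fixed $\alpha$, with dependence on the dimension and the stepsize.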
