[2509.24544] Quantitative convergence of trained single layer neural networks to Gaussian processes

arXiv - Machine Learning 3 min read

About this article

Abstract page for arXiv paper 2509.24544: Quantitative convergence of trained single layer neural networks to Gaussian processes

Statistics > Machine Learning

arXiv:2509.24544 (stat) [Submitted on 29 Sep 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Title: Quantitative convergence of trained single layer neural networks to Gaussian processes

Authors: Eloy Mosig, Andrea Agazzi, Dario Trevisan

Abstract: In this paper, we study the quantitative convergence of shallow neural networks trained via gradient descent to their associated Gaussian processes in the infinite-width limit. While previous work has established qualitative convergence under broad settings, precise, finite-width estimates remain limited, particularly during training. We provide explicit upper bounds on the quadratic Wasserstein distance between the network output and its Gaussian approximation at any training time $t \ge 0$, demonstrating polynomial decay with network width. Our results quantify how architectural parameters, such as width and input dimension, influence convergence, and how training dynamics affect the approximation error.

Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Probability (math.PR)

Cite as: arXiv:2509.24544 [stat.ML] (or arXiv:2509.24544v3 [stat.ML] for this version), https://doi.org/10.48550/arXiv.2509.24544 (arXiv-issued DOI via DataCite)

Submission history: From: Eloy Mosig [v...
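The abstract's central quantity, the quadratic (2-)Wasserstein distance between a finite-width network's output and its Gaussian limit, can be illustrated numerically at initialization (t = 0), the simplest case of the paper's setting. The sketch below is not the paper's method or bound; it is a minimal Monte Carlo experiment under assumed conventions: a one-dimensional input, ReLU activation, standard Gaussian weights with 1/sqrt(width) output scaling, and an empirical 1-D W2 estimate via quantile (sorted-sample) coupling.

```python
import numpy as np

rng = np.random.default_rng(0)

def network_outputs(width, x, n_samples):
    """Outputs of a single-hidden-layer ReLU network at a fixed scalar input x,
    over n_samples independent Gaussian initializations (1/sqrt(width) scaling)."""
    w = rng.standard_normal((n_samples, width))   # input weights
    a = rng.standard_normal((n_samples, width))   # output weights
    hidden = np.maximum(w * x, 0.0)               # ReLU(w_i * x)
    return (a * hidden).sum(axis=1) / np.sqrt(width)

def w2_to_gaussian(samples, sigma):
    """Empirical quadratic Wasserstein distance between the sample distribution
    and N(0, sigma^2), using the 1-D optimal (sorted/quantile) coupling."""
    gaussian = np.sort(sigma * rng.standard_normal(len(samples)))
    return np.sqrt(np.mean((np.sort(samples) - gaussian) ** 2))

x = 1.0
# Limiting variance at init: E[a^2] * E[ReLU(w x)^2] = x^2 / 2 for w, a ~ N(0, 1).
sigma_limit = np.sqrt(x**2 / 2)

for width in (10, 100, 1000):
    dist = w2_to_gaussian(network_outputs(width, x, 20_000), sigma_limit)
    print(f"width={width:5d}  empirical W2 ~ {dist:.3f}")
```

The printed distances shrink as the width grows, consistent with the polynomial-in-width decay the paper quantifies; the paper's contribution is to make such rates explicit for all training times t >= 0, not only at initialization as here.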

Originally published on March 06, 2026. Curated by AI News.

