[2507.10345] Some Super-approximation Rates of ReLU Neural Networks for Korobov Functions
Computer Science > Machine Learning
arXiv:2507.10345 (cs.LG)
[Submitted on 14 Jul 2025 (v1), last revised 5 Mar 2026 (this version, v2)]

Title: Some Super-approximation Rates of ReLU Neural Networks for Korobov Functions
Authors: Yuwen Li, Guozhi Zhang

Abstract: This paper examines the $L_p$ and $W^1_p$ norm approximation errors of ReLU neural networks for Korobov functions. In terms of network width and depth, we derive nearly optimal super-approximation error bounds of order $2m$ in the $L_p$ norm and order $2m-2$ in the $W^1_p$ norm, for target functions with $L_p$ mixed derivatives of order $m$ in each direction. The analysis leverages sparse grid finite elements and the bit extraction technique. Our results improve upon classical lowest-order $L_\infty$ and $H^1$ norm error bounds and demonstrate that the expressivity of neural networks is largely unaffected by the curse of dimensionality.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2507.10345 [cs.LG] (or arXiv:2507.10345v2 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2507.10345 (arXiv-issued DOI via DataCite)

Submission history
From: Guozhi Zhang
[v1] Mon, 14 Jul 2025 14:48:47 UTC (152 KB)
[v2] Thu, 5 Mar 2026 06:47:04 UTC (196 KB)
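A hedged sketch of the shape such super-approximation bounds typically take, stated in LaTeX. This is illustrative only and is not the paper's exact theorem: the precise logarithmic factors, dimension-dependent constants, and the exact dependence on width $W$ and depth $L$ are given in the paper itself; here $\phi$ denotes a ReLU network of width $W$ and depth $L$, and $f$ a Korobov target with order-$m$ $L_p$ mixed derivatives.

```latex
% Illustrative form only (assumption, not the paper's verbatim statement):
% rate 2m in the L_p norm and 2m-2 in the W^1_p norm, in terms of
% network width W and depth L, up to logarithmic and constant factors.
\|f - \phi\|_{L_p} \;\lesssim\; (WL)^{-2m},
\qquad
\|f - \phi\|_{W^1_p} \;\lesssim\; (WL)^{-(2m-2)}.
```

The key point conveyed by the abstract is that the exponents $2m$ and $2m-2$ do not degrade with the dimension $d$, which is the sense in which the expressivity of these networks escapes the curse of dimensionality.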