[2603.26418] Kantorovich--Kernel Neural Operators: Approximation Theory, Asymptotics, and Neural Network Interpretation
Statistics > Machine Learning

arXiv:2603.26418 (stat)

[Submitted on 27 Mar 2026]

Title: Kantorovich--Kernel Neural Operators: Approximation Theory, Asymptotics, and Neural Network Interpretation

Authors: Tian-Xiao He

Abstract: This paper studies a class of multivariate Kantorovich-kernel neural network operators, including the deep Kantorovich-type neural network operators studied by Sharma and Singh. We prove density results, establish quantitative convergence estimates, derive Voronovskaya-type theorems, analyze partial differential equation limits of deep composite operators, prove Korovkin-type theorems, and establish inversion theorems. Furthermore, the paper discusses the connection between neural network architectures and the classical positive operators proposed by Chui, Hsu, He, Lorentz, and Korovkin.

Subjects: Machine Learning (stat.ML); Machine Learning (...
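The abstract page does not reproduce the paper's definitions. For orientation only, univariate Kantorovich-type neural network operators in the literature (in the style studied by Costarelli and others, not necessarily the paper's exact multivariate construction) typically take the form

$$K_n(f)(x) \;=\; \frac{\displaystyle\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor-1} \Big( n\!\int_{k/n}^{(k+1)/n} f(u)\,du \Big)\, \phi_\sigma(nx-k)}{\displaystyle\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor-1} \phi_\sigma(nx-k)}, \qquad f \in C[a,b],$$

where $\phi_\sigma$ is a density kernel built from a sigmoidal activation $\sigma$. Replacing the point samples $f(k/n)$ of the classical operators by the local means $n\int_{k/n}^{(k+1)/n} f(u)\,du$ is what makes the operator Kantorovich-type, and it is what permits convergence results in $L^p$ as well as in $C[a,b]$.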
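A Voronovskaya-type theorem for such operators is a pointwise asymptotic expansion of the approximation error. Under illustrative assumptions (not taken from the paper): $f \in C^2$, an interior point $x$, a symmetric kernel $\phi_\sigma$ with finite discrete moments, so that the local averaging alone contributes the first-order term, one obtains the representative statement

$$\lim_{n\to\infty} n\,\big[K_n(f)(x) - f(x)\big] \;=\; \tfrac{1}{2}\,f'(x);$$

for non-symmetric kernels the limit also picks up a term proportional to the first moment of $\phi_\sigma$. The constants in the paper's theorems will depend on its specific multivariate kernels.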
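Korovkin-type theorems, also listed in the abstract, reduce uniform convergence of a sequence of positive linear operators to a check on three test functions. The classical statement (P. P. Korovkin) reads: if $L_n : C[a,b] \to C[a,b]$ are positive linear operators and $\|L_n e_i - e_i\|_\infty \to 0$ for $e_i(x) = x^i$, $i = 0, 1, 2$, then $\|L_n f - f\|_\infty \to 0$ for every $f \in C[a,b]$. Verifying these three limits for operators of the above type is the standard route to density results of the kind the abstract announces.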