[2601.14026] Universal Approximation Theorem for Input-Connected Multilayer Perceptrons
Computer Science > Machine Learning
arXiv:2601.14026 (cs)
[Submitted on 20 Jan 2026 (v1), last revised 24 Mar 2026 (this version, v2)]

Title: Universal Approximation Theorem for Input-Connected Multilayer Perceptrons
Authors: Vugar Ismailov

Abstract: We present the Input-Connected Multilayer Perceptron (IC-MLP), a feedforward neural network architecture in which each hidden neuron receives, in addition to the outputs of the preceding layer, a direct affine connection from the raw input. We first study this architecture in the univariate setting and give an explicit and systematic description of IC-MLPs with an arbitrary finite number of hidden layers, including iterated formulas for the network functions. In this setting, we prove a universal approximation theorem showing that deep IC-MLPs can approximate any continuous function on a closed interval of the real line if and only if the activation function is nonlinear. We then extend the analysis to vector-valued inputs and establish a corresponding universal approximation theorem for continuous functions on compact subsets of $\mathbb{R}^n$.

Subjects: Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE); Functional Analysis (math.FA)
Cite as: arXiv:2601.14026 [cs.LG] (or arXiv:2601.14026v2 [cs.LG] for this version)
https://doi.org/10.48550/arXiv....
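The sketch below illustrates the architecture as described in the abstract: each hidden layer combines the usual feedforward term with a direct affine connection from the raw input, i.e. roughly h_k = sigma(W_k h_{k-1} + U_k x + b_k). This is only a reading of the abstract, not the paper's definition; the function name ic_mlp_forward, the exact weight shapes, and the plain affine output layer are assumptions made for illustration.

```python
import numpy as np

def ic_mlp_forward(x, hidden_weights, input_weights, biases,
                   out_weights, out_bias, activation=np.tanh):
    """Forward pass of a hypothetical IC-MLP sketch.

    Each hidden layer k computes
        h_k = activation(W_k @ h_{k-1} + U_k @ x + b_k),
    i.e. the standard feedforward term plus a direct affine
    connection from the raw input x (the U_k @ x term).
    """
    h = x
    for W, U, b in zip(hidden_weights, input_weights, biases):
        h = activation(W @ h + U @ x + b)  # direct connection from the raw input
    return out_weights @ h + out_bias      # affine output layer (assumed)


# Tiny usage example: input dimension n = 3, two hidden layers of widths 5 and 4.
rng = np.random.default_rng(0)
n, widths = 3, [5, 4]
x = rng.normal(size=n)

Ws, Us, bs = [], [], []
prev = n
for w in widths:
    Ws.append(rng.normal(size=(w, prev)))  # weights from the preceding layer
    Us.append(rng.normal(size=(w, n)))     # weights from the raw input
    bs.append(rng.normal(size=w))
    prev = w

out_W = rng.normal(size=(1, prev))
out_b = rng.normal(size=1)

y = ic_mlp_forward(x, Ws, Us, bs, out_W, out_b)
print(y.shape)  # (1,)
```

With the input connections U_k set to zero, the recursion above reduces to an ordinary multilayer perceptron, which is one way to read the abstract's claim that the input connections are an addition to, rather than a replacement of, the standard layer-to-layer structure.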