[2603.04807] The Inductive Bias of Convolutional Neural Networks: Locality and Weight Sharing Reshape Implicit Regularization
Statistics > Machine Learning
arXiv:2603.04807 (stat) [Submitted on 5 Mar 2026]

Title: The Inductive Bias of Convolutional Neural Networks: Locality and Weight Sharing Reshape Implicit Regularization
Authors: Tongtong Liang, Esha Singh, Rahul Parhi, Alexander Cloninger, Yu-Xiang Wang

Abstract: We study how architectural inductive bias reshapes the implicit regularization induced by the edge-of-stability phenomenon in gradient descent. Prior work has established that for fully connected networks, the strength of this regularization is governed solely by the global input geometry; consequently, it is insufficient to prevent overfitting on difficult distributions such as the high-dimensional sphere. In this paper, we show that locality and weight sharing fundamentally change this picture. Specifically, we prove that provided the receptive field size $m$ remains small relative to the ambient dimension $d$, these networks generalize on spherical data with a rate of $n^{-\frac{1}{6} + O(m/d)}$, a regime where fully connected networks provably fail. This theoretical result confirms that weight sharing couples the learned filters to the low-dimensional patch manifold, thereby bypassing the high dimensionality of the ambient space. We further corroborate our theory by analyzing...
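The locality/weight-sharing contrast at the heart of the abstract can be made concrete with a minimal sketch (not from the paper; all names and the choice of 1-D circular patches are illustrative assumptions): a fully connected neuron assigns one weight per ambient coordinate, so its parameter count scales with $d$, while a convolutional neuron slides a single $m$-dimensional filter over all patches, so it only ever sees the patch geometry.

```python
import numpy as np

d, m = 128, 5  # ambient dimension d, receptive field size m (m << d)

rng = np.random.default_rng(0)
x = rng.standard_normal(d)
x /= np.linalg.norm(x)  # a sample point on the sphere S^{d-1}

# Fully connected neuron: one weight per coordinate -> d parameters,
# and its response depends on the full d-dimensional input geometry.
w_fc = rng.standard_normal(d)
fc_out = w_fc @ x  # a single scalar response

# Convolutional neuron with weight sharing: the same m-dimensional
# filter is applied to every circular patch -> only m parameters,
# and every response is a function of an m-dimensional patch alone.
w_conv = rng.standard_normal(m)
patches = np.stack([np.roll(x, -i)[:m] for i in range(d)])  # (d, m)
conv_out = patches @ w_conv  # one response per patch, shape (d,)

print(w_fc.size, w_conv.size)  # parameter counts: d vs m
```

Under this toy setup the shared filter interacts with the data only through the $(d, m)$ patch matrix, which is the sense in which, per the abstract, weight sharing couples the learned filters to the low-dimensional patch manifold rather than the ambient space.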