[2505.12096] When Bias Meets Trainability: Connecting Theories of Initialization
Computer Science > Machine Learning
arXiv:2505.12096 (cs)
[Submitted on 17 May 2025 (v1), last revised 28 Feb 2026 (this version, v4)]
Title: When Bias Meets Trainability: Connecting Theories of Initialization
Authors: Alberto Bassi, Marco Baity-Jesi, Aurelien Lucchi, Carlo Albert, Emanuele Francazi

Abstract: The statistical properties of deep neural networks (DNNs) at initialization play an important role in understanding their trainability and the intrinsic architectural biases they possess before data exposure. Well-established mean-field (MF) theories have uncovered that the distribution of parameters of randomly initialized networks strongly influences the behavior of the gradients, dictating whether they explode or vanish. Recent work has shown that untrained DNNs also manifest an initial guessing bias (IGB), in which large regions of the input space are assigned to a single class. In this work, we provide a theoretical proof that links IGB to previous MF theories for a vast class of DNNs, showing that efficient learning is tightly connected to a network's prejudice towards a specific class. This connection leads to a counterintuitive conclusion: the initialization that optimizes trainability is systematically biased rather than neutral.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Machin...
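The IGB phenomenon described in the abstract can be probed numerically with a minimal sketch: initialize a deep bias-free ReLU MLP at random, push a batch of random inputs through it, and measure what fraction of inputs each output class receives. This is an illustrative toy setup, not the paper's exact architecture or initialization; the width, depth, and He-style scaling below are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights):
    """Forward pass of a bias-free ReLU MLP with a linear readout."""
    for W in weights[:-1]:
        x = np.maximum(x @ W, 0.0)  # ReLU hidden layers
    return x @ weights[-1]

d, depth, n_classes, n_inputs = 100, 10, 2, 5000
# He-style scaling (variance 2/d) is the standard choice that keeps
# ReLU activations from exploding or vanishing with depth.
weights = [rng.normal(0.0, np.sqrt(2.0 / d), (d, d)) for _ in range(depth - 1)]
weights.append(rng.normal(0.0, np.sqrt(2.0 / d), (d, n_classes)))

X = rng.normal(size=(n_inputs, d))          # random, class-free inputs
preds = forward(X, weights).argmax(axis=1)  # class assigned to each input
fracs = np.bincount(preds, minlength=n_classes) / n_inputs
print(fracs)  # an unbiased network would give roughly [0.5, 0.5]
```

A strongly skewed fraction vector (e.g. most inputs mapped to one class) is the signature of IGB before any data exposure; rerunning with different seeds changes which class dominates but typically not the presence of the skew.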