[2603.22030] On the Interplay of Priors and Overparametrization in Bayesian Neural Network Posteriors
Computer Science > Machine Learning

arXiv:2603.22030 (cs) [Submitted on 23 Mar 2026]

Title: On the Interplay of Priors and Overparametrization in Bayesian Neural Network Posteriors

Authors: Julius Kobialka, Emanuel Sommer, Chris Kolb, Juntae Kwon, Daniel Dold, David Rügamer

Abstract: Bayesian neural network (BNN) posteriors are often considered impractical for inference: symmetries fragment them, non-identifiabilities inflate their dimensionality, and weight-space priors are seen as meaningless. In this work, we study how overparametrization and priors jointly reshape BNN posteriors and derive implications that allow us to better understand their interplay. We show that redundancy introduces three key phenomena that fundamentally reshape the posterior geometry: balancedness, weight reallocation on equal-probability manifolds, and prior conformity. We validate our findings through extensive experiments with posterior sampling budgets far exceeding those of earlier works, and demonstrate how overparametrization induces structured, prior-aligned weight posterior distributions.

Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)

Cite as: arXiv:2603.22030 [cs.LG] (or arXiv:2603.22030v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.22030