[2307.07753] Learning Expressive Priors for Generalization and Uncertainty Estimation in Neural Networks
Computer Science > Machine Learning
arXiv:2307.07753 (cs)
[Submitted on 15 Jul 2023 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: Learning Expressive Priors for Generalization and Uncertainty Estimation in Neural Networks
Authors: Dominik Schnaus, Jongseok Lee, Daniel Cremers, Rudolph Triebel

Abstract: In this work, we propose a novel prior learning method for advancing generalization and uncertainty estimation in deep neural networks. The key idea is to exploit scalable and structured posteriors of neural networks as informative priors with generalization guarantees. Our learned priors provide expressive probabilistic representations at large scale, like Bayesian counterparts of pre-trained models on ImageNet, and further produce non-vacuous generalization bounds. We also extend this idea to a continual learning framework, where the favorable properties of our priors are desirable. Major enablers are our technical contributions: (1) the sums-of-Kronecker-product computations, and (2) the derivations and optimizations of tractable objectives that lead to improved generalization bounds. Empirically, we exhaustively show the effectiveness of this method for uncertainty estimation and generalization.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); ...
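The abstract highlights sums-of-Kronecker-product computations as a key enabler for scalable, structured covariances. As a minimal sketch (not the paper's actual algorithm), the following shows why this structure is efficient: a matrix-vector product with a sum of Kronecker products, sum_i (A_i ⊗ B_i), can be evaluated via the identity (A ⊗ B) vec(X) = vec(B X A^T) without ever materializing the full Kronecker matrix. All variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4  # Kronecker factor sizes (illustrative)

# Two pairs of Kronecker factors, as in a sum of two Kronecker products.
A1, A2 = rng.standard_normal((2, m, m))
B1, B2 = rng.standard_normal((2, n, n))

# A vector of length m*n, viewed as the column-major vec of an n-by-m matrix X.
X = rng.standard_normal((n, m))
v = X.flatten(order="F")  # vec(X), stacking columns

# Naive approach: materialize the full (m*n)-by-(m*n) matrix. Cost: O(m^2 n^2).
full = np.kron(A1, B1) + np.kron(A2, B2)
direct = full @ v

# Structured approach: (A ⊗ B) vec(X) = vec(B X A^T). Never forms the big matrix.
structured = ((B1 @ X @ A1.T) + (B2 @ X @ A2.T)).flatten(order="F")

assert np.allclose(direct, structured)
```

The structured form reduces both memory and compute from quadratic in m*n to operations on the small factors, which is what makes Kronecker-structured posteriors tractable at the scale of ImageNet-sized networks.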