[2307.07753] Learning Expressive Priors for Generalization and Uncertainty Estimation in Neural Networks

arXiv - Machine Learning 3 min read

Computer Science > Machine Learning
arXiv:2307.07753 (cs)
[Submitted on 15 Jul 2023 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: Learning Expressive Priors for Generalization and Uncertainty Estimation in Neural Networks
Authors: Dominik Schnaus, Jongseok Lee, Daniel Cremers, Rudolph Triebel

Abstract: In this work, we propose a novel prior learning method for advancing generalization and uncertainty estimation in deep neural networks. The key idea is to exploit scalable and structured posteriors of neural networks as informative priors with generalization guarantees. Our learned priors provide expressive probabilistic representations at large scale, like Bayesian counterparts of pre-trained models on ImageNet, and further produce non-vacuous generalization bounds. We also extend this idea to a continual learning framework, where the favorable properties of our priors are desirable. Major enablers are our technical contributions: (1) the sums-of-Kronecker-product computations, and (2) the derivations and optimizations of tractable objectives that lead to improved generalization bounds. Empirically, we exhaustively show the effectiveness of this method for uncertainty estimation and generalization.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); ...
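The abstract's first technical contribution concerns sums-of-Kronecker-product computations. The abstract does not give the paper's algorithm, but a minimal sketch of the standard identity such structured computations rely on is the vec trick, (A ⊗ B) vec(V) = vec(B V Aᵀ), which lets a Kronecker-structured matrix act on a vector without ever materializing the full product; a sum of Kronecker products is then applied term by term. The function names below are illustrative, not from the paper.

```python
import numpy as np

def kron_matvec(A, B, v):
    """Compute (A kron B) @ v via the identity (A kron B) vec(V) = vec(B V A^T),
    where vec stacks columns (Fortran order). Never forms the full Kronecker product."""
    m, n = A.shape
    p, q = B.shape
    V = v.reshape(q, n, order="F")  # undo column-stacking: v = vec(V)
    return (B @ V @ A.T).reshape(m * p, order="F")

def sum_kron_matvec(factors, v):
    """Compute (sum_i A_i kron B_i) @ v by applying the vec trick per term."""
    return sum(kron_matvec(A, B, v) for A, B in factors)

# Check against the dense computation on small random factors.
rng = np.random.default_rng(0)
A1, B1 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
A2, B2 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
v = rng.standard_normal(12)

fast = sum_kron_matvec([(A1, B1), (A2, B2)], v)
dense = (np.kron(A1, B1) + np.kron(A2, B2)) @ v
assert np.allclose(fast, dense)
```

For n x n factors on each side, this costs O(n^3) per term instead of the O(n^4) needed to build and apply the dense n^2 x n^2 matrix, which is what makes Kronecker-structured posteriors scalable to large networks.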

Originally published on March 31, 2026. Curated by AI News.

