[2603.29466] An Isotropic Approach to Efficient Uncertainty Quantification with Gradient Norms


arXiv - AI 4 min read

About this article


Computer Science > Machine Learning

arXiv:2603.29466 (cs) · Submitted on 31 Mar 2026

Title: An Isotropic Approach to Efficient Uncertainty Quantification with Gradient Norms

Authors: Nils Grünefeld, Jes Frellsen, Christian Hardmeier

Abstract: Existing methods for quantifying predictive uncertainty in neural networks are either computationally intractable for large language models or require access to training data that is typically unavailable. We derive a lightweight alternative through two approximations: a first-order Taylor expansion that expresses uncertainty in terms of the gradient of the prediction and the parameter covariance, and an isotropy assumption on the parameter covariance. Together, these yield epistemic uncertainty as the squared gradient norm and aleatoric uncertainty as the Bernoulli variance of the point prediction, from a single forward-backward pass through an unmodified pretrained model. We justify the isotropy assumption by showing that covariance estimates built from non-training data introduce structured distortions that isotropic covariance avoids, and that theoretical results on the spectral properties of large networks support the approximation at scale. Validation against reference Markov Chain Monte Carlo estimates on synthetic problems shows strong correspondence...
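The recipe described in the abstract can be sketched concretely. Under an isotropic parameter covariance Σ = σ²I, a first-order Taylor expansion gives epistemic uncertainty as σ²‖∇θp‖² and aleatoric uncertainty as the Bernoulli variance p(1 − p). The paper's method applies to an unmodified pretrained network; the minimal sketch below substitutes a logistic-regression model so the gradient of the prediction can be written in closed form. The function name, σ² value, and inputs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def predictive_uncertainty(theta, x, sigma2=1.0):
    """Sketch of the two approximations for a logistic 'network'.

    Epistemic: sigma2 * ||grad_theta p||^2  (isotropic covariance
    sigma2 * I plugged into the first-order Taylor expansion).
    Aleatoric: p * (1 - p)  (Bernoulli variance of the point prediction).
    Both come from one forward pass plus one gradient evaluation.
    """
    z = theta @ x
    p = 1.0 / (1.0 + np.exp(-z))        # forward pass: point prediction
    grad = p * (1.0 - p) * x            # d p / d theta for logistic regression
    epistemic = sigma2 * np.dot(grad, grad)   # squared gradient norm
    aleatoric = p * (1.0 - p)                 # Bernoulli variance
    return p, epistemic, aleatoric

# Illustrative fixed inputs (hypothetical, for demonstration only).
theta = np.array([0.5, -0.25, 1.0])
x = np.array([1.0, 2.0, -0.5])
p, epistemic, aleatoric = predictive_uncertainty(theta, x)
```

Note how both uncertainties vanish as the prediction saturates: for |θᵀx| large, p(1 − p) → 0, which shrinks both the Bernoulli variance and the gradient norm. For a real network one would obtain the gradient via backpropagation (the paper's "single forward-backward pass") rather than a closed form.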

Originally published on April 01, 2026. Curated by AI News.

