[2603.29466] An Isotropic Approach to Efficient Uncertainty Quantification with Gradient Norms
Computer Science > Machine Learning
arXiv:2603.29466 (cs) [Submitted on 31 Mar 2026]

Title: An Isotropic Approach to Efficient Uncertainty Quantification with Gradient Norms
Authors: Nils Grünefeld, Jes Frellsen, Christian Hardmeier

Abstract: Existing methods for quantifying predictive uncertainty in neural networks are either computationally intractable for large language models or require access to training data that is typically unavailable. We derive a lightweight alternative through two approximations: a first-order Taylor expansion that expresses uncertainty in terms of the gradient of the prediction and the parameter covariance, and an isotropy assumption on the parameter covariance. Together, these yield epistemic uncertainty as the squared gradient norm and aleatoric uncertainty as the Bernoulli variance of the point prediction, from a single forward-backward pass through an unmodified pretrained model. We justify the isotropy assumption by showing that covariance estimates built from non-training data introduce structured distortions that isotropic covariance avoids, and that theoretical results on the spectral properties of large networks support the approximation at scale. Validation against reference Markov Chain Monte Carlo estimates on synthetic problems shows strong correspondence...
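The abstract compresses a short derivation: a first-order Taylor expansion of the prediction around the parameter point estimate gives

    Var[f(x; theta)] ≈ ∇_theta f(x; theta_hat)^T Σ ∇_theta f(x; theta_hat),

and substituting the isotropic assumption Σ = sigma^2 I reduces this to sigma^2 ‖∇_theta f(x; theta_hat)‖^2. The following is a minimal sketch of how such an estimate could be computed in one forward-backward pass, assuming a PyTorch binary classifier; the model, input, and isotropic scale sigma2 below are illustrative placeholders, not taken from the paper.

    # Sketch: epistemic uncertainty as the (scaled) squared gradient norm,
    # aleatoric uncertainty as the Bernoulli variance of the point prediction.
    import torch
    import torch.nn as nn

    def gradient_norm_uncertainty(model: nn.Module, x: torch.Tensor, sigma2: float = 1.0):
        """Return (epistemic, aleatoric) for a single input.

        epistemic = sigma2 * ||grad_theta p||^2   (isotropic covariance sigma2 * I)
        aleatoric = p * (1 - p)                   (Bernoulli variance of the prediction)
        """
        model.zero_grad()
        logit = model(x)                 # single forward pass
        p = torch.sigmoid(logit).squeeze()
        p.backward()                     # single backward pass: gradient of the prediction
        sq_grad_norm = sum(
            (param.grad ** 2).sum()
            for param in model.parameters()
            if param.grad is not None
        )
        epistemic = sigma2 * sq_grad_norm.item()
        aleatoric = (p * (1.0 - p)).item()
        return epistemic, aleatoric

    # Hypothetical usage with a toy network:
    model = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 1))
    x = torch.randn(1, 8)
    epi, ale = gradient_norm_uncertainty(model, x)
    print(f"epistemic={epi:.4f}  aleatoric={ale:.4f}")

Note that the model itself is unmodified, matching the abstract's claim: the only cost over a standard prediction is one backward pass to accumulate parameter gradients.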