[2602.14934] Activation-Space Uncertainty Quantification for Pretrained Networks

arXiv - Machine Learning

Summary

The paper presents Gaussian Process Activations (GAPA), a post-hoc method for uncertainty quantification in pretrained networks that adds calibrated uncertainty estimates efficiently without altering the backbone's predictions.

Why It Matters

Reliable uncertainty estimates are essential for deploying AI models safely. GAPA avoids the retraining, Monte Carlo sampling, and second-order computations that many existing approaches require, making robust post-hoc uncertainty quantification practical across applications such as regression and classification.

Key Takeaways

  • GAPA shifts Bayesian modeling from weights to activations for better uncertainty quantification.
  • The method preserves original predictions while providing closed-form epistemic variances.
  • GAPA is efficient, requiring no sampling or second-order computations, making it suitable for modern architectures.
  • It outperforms existing post-hoc methods in calibration and out-of-distribution detection.
  • Applicable across various domains including regression, classification, and language modeling.
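The "preserves original predictions" property has a simple Gaussian-process interpretation: if the GP prior over an activation uses the original nonlinearity as its mean function and is conditioned on noise-free cached activations, the residuals are zero, so the posterior mean equals the original activation everywhere while the posterior variance remains closed-form. A minimal one-dimensional sketch of that idea (illustrative only; the RBF kernel, hyperparameters, and the `rbf`/`gp_activation` names are assumptions, not the paper's exact construction):

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    # Squared-exponential kernel between 1-D arrays a (n,) and b (m,)
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_activation(z_test, z_train, phi, ls=1.0, var=1.0, jitter=1e-6):
    """GP 'activation' sketch: prior mean is the original nonlinearity phi,
    conditioned on noise-free cached observations phi(z_train).
    The posterior mean equals phi(z_test) exactly (zero residuals);
    the posterior variance is the usual closed-form GP expression."""
    K = rbf(z_train, z_train, ls, var) + jitter * np.eye(len(z_train))
    k_star = rbf(z_test, z_train, ls, var)
    residual = phi(z_train) - phi(z_train)          # identically zero
    mean = phi(z_test) + k_star @ np.linalg.solve(K, residual)
    v = np.linalg.solve(K, k_star.T)
    variance = var - np.sum(k_star * v.T, axis=1)    # k(z,z) = var for RBF
    return mean, np.maximum(variance, 0.0)

# Cached activations seen "during training", and a frozen nonlinearity
z_train = np.linspace(-2.0, 2.0, 20)
relu = lambda z: np.maximum(z, 0.0)
z_test = np.array([0.0, 5.0])  # in-distribution vs. far from the cache
mean, variance = gp_activation(z_test, z_train, relu)
```

The mean output is exactly `relu(z_test)` by construction, while the variance is near zero inside the cached range and approaches the prior variance far outside it, which is the behavior the takeaways describe.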

Statistics > Machine Learning
arXiv:2602.14934 (stat) [Submitted on 16 Feb 2026]

Title: Activation-Space Uncertainty Quantification for Pretrained Networks
Authors: Richard Bergna, Stefan Depeweg, Sergio Calvo-Ordoñez, Jonathan Plenk, Alvaro Cartea, Jose Miguel Hernández-Lobato

Abstract: Reliable uncertainty estimates are crucial for deploying pretrained models; yet, many strong methods for quantifying uncertainty require retraining, Monte Carlo sampling, or expensive second-order computations and may alter a frozen backbone's predictions. To address this, we introduce Gaussian Process Activations (GAPA), a post-hoc method that shifts Bayesian modeling from weights to activations. GAPA replaces standard nonlinearities with Gaussian-process activations whose posterior mean exactly matches the original activation, preserving the backbone's point predictions by construction while providing closed-form epistemic variances in activation space. To scale to modern architectures, we use a sparse variational inducing-point approximation over cached training activations, combined with local k-nearest-neighbor subset conditioning, enabling deterministic single-pass uncertainty propagation without sampling, backpropagation, or second-order information. Across regression, classification, image segmentation, and language mode...
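The abstract's scaling trick, local k-nearest-neighbor subset conditioning over cached activations, can also be illustrated in one dimension: instead of conditioning on the full cache, each test activation's closed-form variance is computed from only its k nearest cached values, so each query costs one small k-by-k solve. A minimal sketch (the kernel choice, hyperparameters, and the `knn_gp_variance` name are assumptions; the paper additionally uses a sparse variational inducing-point approximation, which this sketch omits):

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    # Squared-exponential kernel between 1-D arrays a (n,) and b (m,)
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def knn_gp_variance(z_test, z_cache, k=16, ls=1.0, var=1.0, jitter=1e-6):
    """Closed-form GP posterior variance at each test activation,
    conditioning only on its k nearest cached training activations.
    Deterministic and sampling-free: one k-by-k linear solve per query."""
    out = np.empty(len(z_test))
    for i, z in enumerate(z_test):
        idx = np.argsort(np.abs(z_cache - z))[:k]   # k nearest neighbors
        Z = z_cache[idx]
        K = rbf(Z, Z, ls, var) + jitter * np.eye(k)
        ks = rbf(np.array([z]), Z, ls, var)          # shape (1, k)
        out[i] = var - (ks @ np.linalg.solve(K, ks.T))[0, 0]
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
z_cache = rng.normal(0.0, 1.0, size=2000)  # cached training activations
v = knn_gp_variance(np.array([0.0, 6.0]), z_cache, k=16)
```

Here the variance is small for a test activation well covered by the cache (0.0) and large for one far outside it (6.0), which is the signal used for calibration and out-of-distribution detection.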
