[2602.19964] On the Equivalence of Random Network Distillation, Deep Ensembles, and Bayesian Inference
Summary
This paper establishes theoretical connections between Random Network Distillation (RND), Deep Ensembles, and Bayesian Inference, enhancing uncertainty quantification in deep learning.
Why It Matters
Understanding the equivalence of RND, Deep Ensembles, and Bayesian Inference is crucial for improving uncertainty quantification methods in deep learning. This paper provides a theoretical foundation that could lead to more efficient and reliable models, which is essential for safe AI deployments.
Key Takeaways
- RND's uncertainty signal is equivalent to the predictive variance of deep ensembles.
- By constructing a specific RND target function, the RND error distribution can be made to mirror the centered posterior predictive distribution of Bayesian inference.
- The findings offer a unified theoretical perspective for uncertainty quantification methods.
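The first takeaway can be illustrated with a toy sketch. The snippet below is an illustrative assumption, not the paper's construction: it uses random Fourier features as a crude stand-in for a wide network (the paper's analysis is in the neural tangent kernel limit). An RND predictor is fit by least squares to a fixed random target, and its squared error at query points is compared with the variance of several independently drawn "ensemble members" fit the same way; both signals stay small near the training data and grow far from it.

```python
import numpy as np

def features(x, n_features=64, seed=1):
    # Random Fourier features as a crude stand-in for a wide network
    # (illustrative assumption; the paper works in the NTK infinite-width limit).
    r = np.random.default_rng(seed)
    w = r.normal(size=(x.shape[1], n_features))
    b = r.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.cos(x @ w + b)

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=(20, 1))   # training inputs
x_query = np.array([[0.0], [3.0]])               # in-distribution vs. novel
phi_train, phi_query = features(x_train), features(x_query)

# --- RND: a fixed random target network, a predictor trained to match it ---
target_w = rng.normal(size=64)
pred_w, *_ = np.linalg.lstsq(phi_train, phi_train @ target_w, rcond=None)
rnd_uncertainty = (phi_query @ pred_w - phi_query @ target_w) ** 2

# --- Ensemble-style variance: repeat the same fit for independent random
# targets and take the variance of the centered predictions at each query ---
members = []
for k in range(10):
    w_k = np.random.default_rng(100 + k).normal(size=64)
    fit_k, *_ = np.linalg.lstsq(phi_train, phi_train @ w_k, rcond=None)
    members.append(phi_query @ fit_k - phi_query @ w_k)
ensemble_var = np.var(members, axis=0)

# Both uncertainty signals are near zero at x=0 (inside the training range)
# and much larger at x=3 (far from the data).
print("RND squared error :", rnd_uncertainty)
print("Ensemble variance :", ensemble_var)
```

In this linear-in-features setting the two quantities come from the same source: both measure how much of the query's feature vector lies outside the span of the training features, which is a finite-dimensional caricature of the equivalence the paper proves in the infinite-width limit.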
Computer Science > Machine Learning
arXiv:2602.19964 (cs) [Submitted on 23 Feb 2026]
Title: On the Equivalence of Random Network Distillation, Deep Ensembles, and Bayesian Inference
Authors: Moritz A. Zanger, Yijun Wu, Pascal R. Van der Vaart, Wendelin Böhmer, Matthijs T. J. Spaan
Abstract: Uncertainty quantification is central to safe and efficient deployments of deep learning models, yet many computationally practical methods lack rigorous theoretical motivation. Random network distillation (RND) is a lightweight technique that measures novelty via prediction errors against a fixed random target. While empirically effective, it has remained unclear what uncertainties RND measures and how its estimates relate to other approaches, e.g. Bayesian inference or deep ensembles. This paper establishes these missing theoretical connections by analyzing RND within the neural tangent kernel framework in the limit of infinite network width. Our analysis reveals two central findings in this limit: (1) The uncertainty signal from RND -- its squared self-predictive error -- is equivalent to the predictive variance of a deep ensemble. (2) By constructing a specific RND target function, we show that the RND error distribution can be made to mirror the centered posterior predictive distribution o...