[2602.21408] Generative Bayesian Computation as a Scalable Alternative to Gaussian Process Surrogates
Summary
This article presents Generative Bayesian Computation (GBC) as a scalable alternative to Gaussian Process (GP) surrogates, using Implicit Quantile Networks to address the cubic computational cost, stationarity assumptions, and Gaussian predictive distributions that limit GPs.
Why It Matters
The research marks a significant advance in surrogate modeling, offering a more efficient framework for emulating expensive computer experiments. By improving predictive accuracy and scaling linearly in the number of training points, GBC could broaden uncertainty quantification in statistics, simulation, and computational modeling.
Key Takeaways
- GBC offers a scalable solution to the limitations of Gaussian Process surrogates.
- Continuous Ranked Probability Score (CRPS) improves by 11-26% on jump-process benchmarks, by 14% on a ten-dimensional Friedman function, and by up to 46% with a boundary-augmented variant.
- GBC scales linearly to 90,000 training points, a regime where dense-covariance GP methods are infeasible.
- In active learning, a randomized-prior IQN ensemble achieves nearly three times lower root mean square error (RMSE) than deep GP active learning on the Rocket LGBB benchmark.
- GPs still perform better on smooth surfaces due to their inherent regularization capabilities.
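Since several of the takeaways above are stated in terms of CRPS, it may help to see how that metric is computed. Below is a minimal sketch of the standard sample-based CRPS estimator (this is generic and not code from the paper); it applies to any method, like GBC, that produces draws from a predictive distribution:

```python
import numpy as np

def crps_samples(samples, y):
    """Sample-based CRPS estimator for a scalar observation y:
    CRPS = E|X - y| - 0.5 * E|X - X'|,
    where X, X' are independent draws from the predictive distribution.
    Lower is better; 0 means a perfect deterministic forecast."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

# A degenerate forecast that puts all mass at 2.0, scored against y = 3.0,
# reduces CRPS to the absolute error |2.0 - 3.0| = 1.0.
print(crps_samples(np.full(100, 2.0), 3.0))  # → 1.0
```

The pairwise term penalizes overly wide predictive distributions, so CRPS rewards forecasts that are both accurate and sharp.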
arXiv:2602.21408 [cs.LG] (Submitted on 24 Feb 2026)
Title: Generative Bayesian Computation as a Scalable Alternative to Gaussian Process Surrogates
Authors: Nick Polson, Vadim Sokolov
Abstract: Gaussian process (GP) surrogates are the default tool for emulating expensive computer experiments, but cubic cost, stationarity assumptions, and Gaussian predictive distributions limit their reach. We propose Generative Bayesian Computation (GBC) via Implicit Quantile Networks (IQNs) as a surrogate framework that targets all three limitations. GBC learns the full conditional quantile function from input-output pairs; at test time, a single forward pass per quantile level produces draws from the predictive distribution. Across fourteen benchmarks we compare GBC to four GP-based methods. GBC improves CRPS by 11-26% on piecewise jump-process benchmarks, by 14% on a ten-dimensional Friedman function, and scales linearly to 90,000 training points where dense-covariance GPs are infeasible. A boundary-augmented variant matches or outperforms Modular Jump GPs on two-dimensional jump datasets (up to 46% CRPS improvement). In active learning, a randomized-prior IQN ensemble achieves nearly three times lower RMSE than deep GP active learning on Rocket LGBB. Overall, GBC records ...
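The abstract's sampling mechanism, one forward pass of the quantile network per quantile level, amounts to inverse-transform sampling: draw tau uniformly on (0, 1) and evaluate the learned conditional quantile function at tau. A minimal sketch, with the standard pinball loss (the usual training objective for quantile networks, an assumption about the paper's exact setup) and a standard-normal inverse CDF standing in for a trained IQN:

```python
import numpy as np
from statistics import NormalDist

def pinball_loss(y, q_pred, tau):
    """Quantile (pinball) loss at level tau: the per-quantile objective
    minimized when fitting a quantile network to input-output pairs."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def sample_predictive(quantile_fn, n_draws, rng):
    """Inverse-transform sampling: one quantile-function evaluation
    per draw, i.e. one forward pass per quantile level."""
    taus = rng.uniform(0.0, 1.0, size=n_draws)
    return np.array([quantile_fn(t) for t in taus])

# Stand-in for a trained network's conditional quantile function q(tau | x):
# the exact quantile function of a standard normal.
rng = np.random.default_rng(0)
draws = sample_predictive(NormalDist().inv_cdf, 10_000, rng)
# draws.mean() is near 0 and draws.std() is near 1, recovering
# the target distribution without any Gaussian assumption in the sampler.
```

Because the sampler only needs pointwise quantile evaluations, the predictive distribution can be arbitrarily non-Gaussian, which is how GBC sidesteps the Gaussian predictive restriction of GP surrogates.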