[2604.04736] Sampling Parallelism for Fast and Efficient Bayesian Learning
Computer Science > Machine Learning
arXiv:2604.04736 (cs) [Submitted on 6 Apr 2026]

Title: Sampling Parallelism for Fast and Efficient Bayesian Learning
Authors: Asena Karolin Özdemir, Lars H. Heyen, Arvid Weyrauch, Achim Streit, Markus Götz, Charlotte Debus

Abstract: Machine learning models, and deep neural networks in particular, are increasingly deployed in risk-sensitive domains such as healthcare, environmental forecasting, and finance, where reliable quantification of predictive uncertainty is essential. However, many uncertainty quantification (UQ) methods remain difficult to apply due to their substantial computational cost. Sampling-based Bayesian learning approaches, such as Bayesian neural networks (BNNs), are particularly expensive since drawing and evaluating multiple parameter samples rapidly exhausts memory and compute resources. These constraints have limited the accessibility and exploration of Bayesian techniques thus far. To address these challenges, we introduce sampling parallelism, a simple yet powerful parallelization strategy that targets the primary bottleneck of sampling-based Bayesian learning: the samples themselves. By distributing sample evaluations across multiple GPUs, our method reduces memory pressure and training time without requiring architectural changes or extensive hyperparameter tuning. W...
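To make the core idea concrete, the following is a conceptual sketch of distributing sample evaluations, not the authors' implementation. It partitions the posterior weight samples of a toy Bayesian model into shards, one per "device" (here evaluated sequentially on CPU with NumPy; a real multi-GPU version would place each shard on its own accelerator), so that peak memory per device scales with S / num_devices. All function and variable names are hypothetical.

```python
import numpy as np

def predict(weights, x):
    # Toy "network": a single linear layer applied with one weight sample.
    return x @ weights

def sampling_parallel_predict(weight_samples, x, num_devices):
    # Partition the S weight samples into num_devices shards. In the
    # multi-GPU setting described in the abstract, each shard would be
    # evaluated on a separate device; only S / num_devices samples need
    # to reside in any one device's memory at a time.
    shards = np.array_split(weight_samples, num_devices)
    per_shard_preds = [
        np.stack([predict(w, x) for w in shard]) for shard in shards
    ]
    preds = np.concatenate(per_shard_preds)       # shape (S, batch, out)
    # Aggregate across samples: predictive mean and a simple
    # uncertainty estimate (standard deviation over samples).
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(0)
S, D, O = 8, 3, 2
weight_samples = rng.normal(size=(S, D, O))   # S posterior weight samples
x = rng.normal(size=(4, D))                   # a batch of 4 inputs
mean, std = sampling_parallel_predict(weight_samples, x, num_devices=2)
print(mean.shape, std.shape)  # (4, 2) (4, 2)
```

Because the shards are evaluated independently and only their predictions are gathered, the result is identical to evaluating all S samples on one device, which is why the abstract notes that no architectural changes are required.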