[2501.04272] On weight and variance uncertainty in neural networks for regression tasks
arXiv:2501.04272 [stat.ML] (Statistics > Machine Learning)
Submitted on 8 Jan 2025 (v1); last revised 28 Feb 2026 (this version, v2)
Authors: Moein Monemi, Morteza Amini, S. Mahmoud Taheri, Mohammad Arashi

Abstract: We investigate the problem of weight uncertainty originally proposed by Blundell et al. (2015), "Weight uncertainty in neural networks," International Conference on Machine Learning, PMLR, 1613-1622, in the context of neural networks designed for regression tasks, and we extend their framework by incorporating variance uncertainty into the model. Our analysis demonstrates that explicitly modeling uncertainty in the variance parameter can significantly enhance the predictive performance of Bayesian neural networks. By considering a full posterior distribution over the variance, the model achieves improved generalization compared to approaches that treat variance as fixed or deterministic. We evaluate the generalization capability of our proposed approach through a function approximation example and further validate it on the riboflavin genetic dataset. Our exploration encompasses both fully connected dense networks and dropout neural networks, employing Gaussian and spike-and-slab priors respectively for the network...
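The core idea the abstract describes, placing a variational posterior over the observation variance alongside the posterior over the weights, can be illustrated on a toy linear regression. The sketch below is not the paper's implementation; it is a minimal, assumption-laden example of the general Bayes-by-backprop recipe (Blundell et al., 2015) extended with a Gaussian variational posterior over s = log sigma^2, using standard-normal priors, single-sample reparameterized gradients, and Adam. All names and hyperparameters here are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + eps, eps ~ N(0, 0.5^2).
N = 200
x = rng.normal(size=N)
y = 2.0 * x + 0.5 * rng.normal(size=N)

softplus = lambda r: np.log1p(np.exp(r))
sigmoid = lambda r: 1.0 / (1.0 + np.exp(-r))

# Mean-field Gaussian variational posteriors over the weight w AND over
# s = log sigma^2 (the log observation variance) -- the "variance
# uncertainty" extension. rho parameterizes the std via softplus.
params = {"mu_w": 0.0, "rho_w": -3.0, "mu_s": 0.0, "rho_s": -3.0}

def elbo_grads(p, eps_w, eps_s):
    """One-sample reparameterized gradient of the ELBO."""
    sig_w, sig_s = softplus(p["rho_w"]), softplus(p["rho_s"])
    w = p["mu_w"] + sig_w * eps_w          # w ~ q(w)
    s = p["mu_s"] + sig_s * eps_s          # s ~ q(s)
    r = y - w * x                          # residuals
    inv_var = np.exp(-s)
    # Gradients of the Gaussian log-likelihood w.r.t. sampled w and s.
    dll_dw = np.sum(r * x) * inv_var
    dll_ds = np.sum(-0.5 + 0.5 * r**2 * inv_var)
    # Subtract closed-form gradients of KL(q || N(0, 1)):
    # dKL/dmu = mu, dKL/dsigma = sigma - 1/sigma.
    return {
        "mu_w": dll_dw - p["mu_w"],
        "rho_w": (dll_dw * eps_w - (sig_w - 1.0 / sig_w)) * sigmoid(p["rho_w"]),
        "mu_s": dll_ds - p["mu_s"],
        "rho_s": (dll_ds * eps_s - (sig_s - 1.0 / sig_s)) * sigmoid(p["rho_s"]),
    }

# Adam ascent on the ELBO.
m = {k: 0.0 for k in params}
v = {k: 0.0 for k in params}
lr, b1, b2 = 0.05, 0.9, 0.999
for t in range(1, 3001):
    g = elbo_grads(params, rng.normal(), rng.normal())
    for k in params:
        m[k] = b1 * m[k] + (1 - b1) * g[k]
        v[k] = b2 * v[k] + (1 - b2) * g[k] ** 2
        mhat = m[k] / (1 - b1 ** t)
        vhat = v[k] / (1 - b2 ** t)
        params[k] += lr * mhat / (np.sqrt(vhat) + 1e-8)

# The posterior means should recover the true weight (2.0) and the
# true noise variance (0.25), with calibrated uncertainty in both.
print(params["mu_w"], np.exp(params["mu_s"]))
```

The same construction carries over to the deep networks in the paper: each weight gets its own variational factor, and the single extra factor over log sigma^2 is learned jointly with them, rather than fixing sigma^2 or fitting it by maximum likelihood.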