[2602.19610] Variational Inference for Bayesian MIDAS Regression
Summary
This paper presents a Coordinate Ascent Variational Inference (CAVI) algorithm for Bayesian MIDAS (Mixed Data Sampling) regression, demonstrating large speedups over Gibbs sampling while matching its accuracy in parameter estimation.
Why It Matters
The CAVI algorithm addresses the limitations of existing Bayesian estimation techniques for mixed-frequency data: MCMC samplers are slow at this scale, and generic gradient-based alternatives struggle with the model's bilinear structure. Its efficiency and accuracy make it a valuable tool for researchers and practitioners in machine learning and statistics, especially in financial modeling.
Key Takeaways
- CAVI achieves speedups of 107x to 1,772x compared to Gibbs sampling.
- The algorithm maintains high calibration for weight function parameters (coverage above 92%).
- CAVI produces posterior means nearly identical to block Gibbs sampler benchmarks.
- The method propagates uncertainty across variational blocks through second moments rather than plug-in point estimates, improving reliability.
- An empirical application demonstrates CAVI's effectiveness in forecasting the realized volatility of S&P 500 returns; the sketch after this list illustrates the MIDAS structure underlying these results.
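For orientation, a textbook MIDAS regression with a linearly parameterized, normalized weight function has roughly the following form. This is a standard-setup sketch; the paper's exact notation and basis choice may differ:

```latex
% Illustrative MIDAS regression; a standard textbook form,
% not necessarily the paper's exact parameterization.
\begin{align}
  y_t &= \beta_0 + \beta_1 \sum_{j=0}^{J} w_j(\theta)\, x^{(m)}_{t-j/m}
         + \varepsilon_t,
  \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma^2), \\
  w(\theta) &= B\theta,
  \qquad \mathbf{1}^\top B\theta = 1 .
\end{align}
```

Here $x^{(m)}$ is the predictor observed $m$ times per low-frequency period and $B$ is a known lag-polynomial basis. The normalization pins down the scale of the weights so that $\beta_1$ is identified as the impact coefficient, and the product $\beta_1 w_j(\theta)$ is bilinear in $(\beta_1, \theta)$: precisely the structure the abstract below credits with breaking generic HMC while preserving the conditional conjugacy that CAVI exploits.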
arXiv:2602.19610 (cs) [Submitted on 23 Feb 2026]
Title: Variational Inference for Bayesian MIDAS Regression
Authors: Luigi Simeone
Abstract: We develop a Coordinate Ascent Variational Inference (CAVI) algorithm for Bayesian Mixed Data Sampling (MIDAS) regression with linear weight parameterizations. The model separates impact coefficients from weighting function parameters through a normalization constraint, creating a bilinear structure that renders generic Hamiltonian Monte Carlo samplers unreliable while preserving conditional conjugacy exploitable by CAVI. Each variational update admits a closed-form solution: Gaussian for regression coefficients and weight parameters, Inverse-Gamma for the error variance. The algorithm propagates uncertainty across blocks through second moments, distinguishing it from naive plug-in approximations. In a Monte Carlo study spanning 21 data-generating configurations with up to 50 predictors, CAVI produces posterior means nearly identical to a block Gibbs sampler benchmark while achieving speedups of 107x to 1,772x (Table 9). Generic automatic differentiation VI (ADVI), by contrast, produces bias 714 times larger while being orders of magnitude slower, confirming the value of model-specific derivations. Weight function parameters maintain excellent calibration (coverage above 92%) across...
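To make the closed-form update cycle concrete, below is a minimal CAVI sketch for the conjugate core of such a model: a Bayesian linear regression with a Gaussian coefficient block and an Inverse-Gamma variance block. It is a hypothetical illustration, not the paper's code; it omits the MIDAS weight block and the normalization constraint, and all names (`cavi_linear_regression`, `tau2`, `a0`, `b0`) are invented for this example. Note how the variance update uses the full second moment of the coefficients (the trace term), the kind of cross-block uncertainty propagation the abstract contrasts with naive plug-in approximations.

```python
import numpy as np

def cavi_linear_regression(X, y, tau2=10.0, a0=0.01, b0=0.01,
                           n_iter=100, tol=1e-8):
    """CAVI for y = X @ beta + eps with eps ~ N(0, sigma2 * I),
    prior beta ~ N(0, tau2 * I), sigma2 ~ InvGamma(a0, b0).
    Variational family: q(beta) = N(m, S), q(sigma2) = InvGamma(a, b).
    Names and priors are illustrative, not taken from the paper."""
    n, p = X.shape
    XtX, Xty, yty = X.T @ X, X.T @ y, y @ y
    a = a0 + 0.5 * n                 # shape is fixed by conjugacy
    b = b0                           # rate: updated each sweep
    m = np.zeros(p)
    for _ in range(n_iter):
        e_prec = a / b               # E_q[1 / sigma2]
        # Gaussian block: closed-form update for q(beta).
        S = np.linalg.inv(e_prec * XtX + np.eye(p) / tau2)
        m_new = e_prec * S @ Xty
        # Inverse-Gamma block: E_q||y - X beta||^2 includes the
        # trace term tr(XtX @ S) -- second-moment uncertainty
        # propagation, not a plug-in of the point estimate alone.
        second_moment = S + np.outer(m_new, m_new)
        b = b0 + 0.5 * (yty - 2.0 * m_new @ Xty
                        + np.trace(XtX @ second_moment))
        if np.max(np.abs(m_new - m)) < tol:
            m = m_new
            break
        m = m_new
    return m, S, a, b

# Hypothetical demo on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.normal(size=200)
m, S, a, b = cavi_linear_regression(X, y)
print("posterior mean of beta:", np.round(m, 3))   # close to (1, -2, 0.5)
print("E_q[sigma2] =", round(b / (a - 1), 4))      # close to 0.3**2
```

Per the abstract, the paper's algorithm adds a second closed-form Gaussian block for the weight parameters to this cycle, with the error-variance update remaining Inverse-Gamma.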