[2602.19610] Variational Inference for Bayesian MIDAS Regression

arXiv - Machine Learning · 4 min read

Summary

This paper presents a Coordinate Ascent Variational Inference (CAVI) algorithm for Bayesian MIDAS regression, demonstrating large speed gains over traditional MCMC methods such as Gibbs sampling while matching their accuracy in parameter estimation.

Why It Matters

The development of the CAVI algorithm addresses the limitations of existing Bayesian regression techniques, particularly in handling mixed data sampling. Its efficiency and accuracy make it a valuable tool for researchers and practitioners in machine learning and statistics, especially in financial modeling.
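MIDAS (Mixed Data Sampling) regression aggregates regressors observed at a higher frequency than the target through a parameterized lag-weight function. As an illustrative sketch only (the paper's exact linear weight parameterization is not reproduced here), one common linear-in-parameters choice is the Almon polynomial lag, normalized so the weights sum to one:

```python
import numpy as np

def almon_weights(theta, n_lags):
    """Almon polynomial lag weights: linear in theta before normalization.

    theta is a list of polynomial coefficients (an illustrative choice;
    the paper's parameterization may differ).
    """
    j = np.arange(1, n_lags + 1)
    raw = sum(t * j**k for k, t in enumerate(theta))
    return raw / raw.sum()  # normalization constraint: weights sum to 1

def midas_aggregate(x_high, weights):
    """Collapse the most recent high-frequency observations into one regressor."""
    n = len(weights)
    return weights @ x_high[-n:]

w = almon_weights([1.0, -0.05], n_lags=10)   # gently declining weights
x_daily = np.linspace(1.0, 2.0, 30)          # e.g. 30 daily observations
x_agg = midas_aggregate(x_daily, w)          # one low-frequency regressor
```

The normalization step is what separates the impact coefficient (overall scale) from the weight-shape parameters, the split the paper's model exploits.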

Key Takeaways

  • CAVI achieves speedups of 107x to 1,772x compared to Gibbs sampling.
  • The algorithm maintains high calibration for weight function parameters (coverage above 92%).
  • CAVI produces posterior means nearly identical to block Gibbs sampler benchmarks.
  • The method propagates uncertainty across parameter blocks through second moments, distinguishing it from naive plug-in approximations.
  • An empirical application shows CAVI's effectiveness in forecasting the realized volatility of S&P 500 returns.
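The conjugate CAVI pattern behind these results can be sketched on a plain Bayesian linear regression (not the paper's bilinear MIDAS model): each block gets a closed-form update, Gaussian for the coefficients and Inverse-Gamma for the error variance, and the variance update uses the full second moment of the coefficients rather than a plug-in point estimate. All priors and names below are illustrative assumptions:

```python
import numpy as np

def cavi_linear_regression(X, y, iters=50, a0=2.0, b0=2.0, tau2=10.0):
    """Conjugate CAVI for y = X beta + eps, eps ~ N(0, sigma2).

    Illustrative priors: beta ~ N(0, tau2 I), sigma2 ~ InvGamma(a0, b0).
    Blocks exchange expectations (E[1/sigma2], E[beta beta']), so
    uncertainty is propagated between blocks rather than plugged in.
    """
    n, p = X.shape
    E_inv_sigma2 = 1.0  # initial guess for E[1/sigma2]
    for _ in range(iters):
        # q(beta) = N(mu, S): closed-form Gaussian update
        S = np.linalg.inv(E_inv_sigma2 * X.T @ X + np.eye(p) / tau2)
        mu = E_inv_sigma2 * S @ X.T @ y
        E_bbT = S + np.outer(mu, mu)  # second moment, not just mu mu'
        # q(sigma2) = InvGamma(a, b): closed-form update using E[beta beta']
        a = a0 + n / 2.0
        b = b0 + 0.5 * (y @ y - 2 * y @ X @ mu + np.trace(X.T @ X @ E_bbT))
        E_inv_sigma2 = a / b  # E[1/sigma2] under InvGamma(a, b)
    return mu, S, a, b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.3 * rng.normal(size=200)
mu, S, a, b = cavi_linear_regression(X, y)
```

The speedups reported in the paper come from iterating such closed-form updates instead of drawing thousands of MCMC samples; the paper's model additionally alternates between the impact-coefficient and weight-parameter blocks.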

Computer Science > Machine Learning
arXiv:2602.19610 (cs) [Submitted on 23 Feb 2026]
Title: Variational Inference for Bayesian MIDAS Regression
Authors: Luigi Simeone

Abstract: We develop a Coordinate Ascent Variational Inference (CAVI) algorithm for Bayesian Mixed Data Sampling (MIDAS) regression with linear weight parameterizations. The model separates impact coefficients from weighting function parameters through a normalization constraint, creating a bilinear structure that renders generic Hamiltonian Monte Carlo samplers unreliable while preserving conditional conjugacy exploitable by CAVI. Each variational update admits a closed-form solution: Gaussian for regression coefficients and weight parameters, Inverse-Gamma for the error variance. The algorithm propagates uncertainty across blocks through second moments, distinguishing it from naive plug-in approximations. In a Monte Carlo study spanning 21 data-generating configurations with up to 50 predictors, CAVI produces posterior means nearly identical to a block Gibbs sampler benchmark while achieving speedups of 107x to 1,772x (Table 9). Generic automatic differentiation VI (ADVI), by contrast, produces bias 714 times larger while being orders of magnitude slower, confirming the value of model-specific derivations. Weight function parameters maintain excellent calibration (coverage above 92%) acr...
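The bilinear structure the abstract refers to can be sketched in a generic single-predictor form (notation here is illustrative, not the paper's):

```latex
y_t = \beta_0 + \beta \sum_{j=1}^{J} w_j(\theta)\, x^{(h)}_{t-j} + \varepsilon_t,
\qquad \sum_{j=1}^{J} w_j(\theta) = 1,
\qquad \varepsilon_t \sim \mathcal{N}(0, \sigma^2).
```

Holding the weight parameters $\theta$ fixed, the model is linear in the impact coefficient $\beta$; holding $\beta$ fixed, and with $w$ linear in $\theta$, it is linear in $\theta$. That conditional linearity is the conjugacy CAVI exploits, while the product $\beta\, w_j(\theta)$ is the bilinearity that makes generic HMC samplers unreliable here.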
