[2602.22428] Calibrated Test-Time Guidance for Bayesian Inference
Summary
This paper introduces a method for calibrated test-time guidance in Bayesian inference. Existing approaches focus on maximizing a reward rather than sampling from the true posterior, which miscalibrates inference; the proposed estimators correct this.
Why It Matters
Miscalibrated inference in Bayesian models can produce misleading results in critical applications such as scientific imaging and downstream decision-making. By restoring calibration to guided sampling, this work makes Bayesian inference with pretrained generative models more reliable, a capability that underpins many AI and machine learning applications.
Key Takeaways
- Existing test-time guidance methods often miscalibrate Bayesian inference.
- The authors identify structural approximations that lead to this miscalibration.
- Proposed alternative estimators enable more accurate sampling from the Bayesian posterior.
- The new method outperforms previous techniques in various Bayesian inference tasks.
- Results match state-of-the-art performance in black hole image reconstruction.
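The central distinction in these takeaways, maximizing a reward versus sampling from the posterior, can be illustrated with a toy one-dimensional example. This is a generic sketch, not the paper's estimators: the Gaussian prior/likelihood, step sizes, and iteration counts below are all illustrative assumptions. Noisy Langevin steps sample the posterior (correct mean and spread), while pure gradient ascent on the same objective collapses to the mode, discarding uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: prior x ~ N(0, 1), observation y ~ N(x, sigma_y2).
# Then grad log posterior(x) = -x + (y - x) / sigma_y2, and the posterior is
# Gaussian with mean y / (1 + sigma_y2) and variance sigma_y2 / (1 + sigma_y2).
y, sigma_y2 = 1.0, 0.5

def grad_log_posterior(x):
    return -x + (y - x) / sigma_y2

def langevin(n_steps=20000, step=0.01):
    """Unadjusted Langevin: gradient steps plus noise target the posterior."""
    x, samples = 0.0, []
    for _ in range(n_steps):
        x = x + step * grad_log_posterior(x) \
              + np.sqrt(2 * step) * rng.standard_normal()
        samples.append(x)
    return np.array(samples[5000:])  # discard burn-in

def maximize(n_steps=2000, step=0.01):
    """Reward maximization: same gradient, no noise, so it finds only the mode."""
    x = 0.0
    for _ in range(n_steps):
        x = x + step * grad_log_posterior(x)
    return x

samples = langevin()
mode = maximize()
post_mean = y / (1 + sigma_y2)          # analytic posterior mean  = 2/3
post_var = sigma_y2 / (1 + sigma_y2)    # analytic posterior var   = 1/3
```

The Langevin samples recover both the posterior mean and its variance, whereas the noiseless optimizer returns a single point with zero spread; in the paper's setting the same gap appears when test-time guidance optimizes reward instead of sampling.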
Paper Details
Computer Science > Machine Learning, arXiv:2602.22428 (cs.LG)
Submitted on 25 Feb 2026
Title: Calibrated Test-Time Guidance for Bayesian Inference
Authors: Daniel Geyfman, Felix Draxler, Jan Groeneveld, Hyunsoo Lee, Theofanis Karaletsos, Stephan Mandt
Abstract: Test-time guidance is a widely used mechanism for steering pretrained diffusion models toward outcomes specified by a reward function. Existing approaches, however, focus on maximizing reward rather than sampling from the true Bayesian posterior, leading to miscalibrated inference. In this work, we show that common test-time guidance methods do not recover the correct posterior distribution and identify the structural approximations responsible for this failure. We then propose consistent alternative estimators that enable calibrated sampling from the Bayesian posterior. We significantly outperform previous methods on a set of Bayesian inference tasks, and match state-of-the-art in black hole image reconstruction.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
DOI: https://doi.org/10.48550/arXiv.2602.22428