[2602.21426] Proximal-IMH: Proximal Posterior Proposals for Independent Metropolis-Hastings with Approximate Operators
Summary
The paper introduces Proximal-IMH, a novel sampling method for Bayesian inverse problems that enhances the efficiency of the Independent Metropolis-Hastings algorithm by correcting biases in approximate posterior distributions.
Why It Matters
This research addresses the challenge of sampling from complex posterior distributions in Bayesian inverse problems, a core bottleneck in scientific, engineering, and imaging applications. By improving acceptance rates and mixing, Proximal-IMH enables more accurate and efficient Bayesian analyses in settings where exact posterior sampling is computationally expensive.
Key Takeaways
- Proximal-IMH corrects biases in approximate posterior distributions.
- The method improves acceptance rates and mixing in sampling.
- It is applicable to both linear and nonlinear input-output operators.
- Numerical experiments show Proximal-IMH outperforms existing IMH variants.
- The approach is particularly useful for inverse problems where exact sampling is computationally expensive.
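To make the sampling mechanism behind these takeaways concrete, here is a minimal toy sketch of a plain independent Metropolis-Hastings chain with a biased approximate proposal. This is not the paper's algorithm: the 1-D Gaussian target and proposal, the bias, and all function names are illustrative assumptions chosen so the acceptance rule is easy to see.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): exact posterior N(1, 1),
# biased approximate proposal N(0, 1).
def log_target(x):
    # Unnormalized exact log-posterior.
    return -0.5 * (x - 1.0) ** 2

def log_proposal(x):
    # Log-density of the independent (state-free) proposal.
    return -0.5 * x ** 2

def imh(n_iter=20000):
    x = 0.0
    accepts = 0
    samples = np.empty(n_iter)
    for t in range(n_iter):
        y = rng.normal(0.0, 1.0)  # fresh independent proposal draw
        # IMH log acceptance ratio: log[pi(y) q(x)] - log[pi(x) q(y)].
        log_alpha = (log_target(y) - log_target(x)
                     + log_proposal(x) - log_proposal(y))
        if np.log(rng.uniform()) < log_alpha:
            x, accepts = y, accepts + 1
        samples[t] = x
    return samples, accepts / n_iter

samples, rate = imh()
print(f"acceptance rate {rate:.2f}, posterior mean {samples.mean():.2f}")
```

Because the proposal mean is offset from the target mean, many proposals are rejected; Proximal-IMH's point, per the abstract, is to shrink exactly this mismatch before the accept/reject step.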
Computer Science > Machine Learning
arXiv:2602.21426 (cs)
[Submitted on 24 Feb 2026]
Title: Proximal-IMH: Proximal Posterior Proposals for Independent Metropolis-Hastings with Approximate Operators
Authors: Youguang Chen, George Biros
Abstract
We consider the problem of sampling from a posterior distribution arising in Bayesian inverse problems in science, engineering, and imaging. Our method belongs to the family of independence Metropolis-Hastings (IMH) sampling algorithms, which are common in Bayesian inference. Relying on the existence of an approximate posterior distribution that is cheaper to sample from but may have significant bias, we introduce Proximal-IMH, a scheme that removes this bias by correcting samples from the approximate posterior through an auxiliary optimization problem. This yields a local adjustment that trades off adherence to the exact model against stability around the approximate reference point. For idealized settings, we prove that the proximal correction tightens the match between approximate and exact posteriors, thereby improving acceptance rates and mixing. The method applies to both linear and nonlinear input-output operators and is particularly suitable for inverse problems where exact posterior sampling is too expensive. We present numeri...
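The abstract's "auxiliary optimization problem" suggests a proximal-type map that pulls an approximate draw toward the exact posterior while staying close to the reference point. The sketch below illustrates that trade-off on a toy 1-D Gaussian; the objective, the parameter lam, and the helper prox_correct are hypothetical assumptions, not the paper's formulation.

```python
import numpy as np

# Toy proximal correction (illustrative assumption, not the paper's scheme):
#   z(y) = argmin_z  -log pi(z) + ||z - y||^2 / (2 * lam)
# with exact log-posterior log pi(z) = -(z - 1)^2 / 2. Smaller lam keeps z
# near the approximate draw y; larger lam pulls z toward the exact posterior.
def prox_correct(y, lam=0.5, steps=200, lr=0.1):
    z = y
    for _ in range(steps):
        # Gradient of the objective in z: (z - 1) from -log pi, plus the
        # quadratic anchor term (z - y) / lam.
        grad = (z - 1.0) + (z - y) / lam
        z -= lr * grad
    return z

y = -2.0                      # a draw from a biased approximate posterior
z = prox_correct(y)
# For this Gaussian the minimizer has the closed form (lam * 1 + y) / (lam + 1):
# a convex combination of y and the exact posterior mode at 1.
z_exact = (0.5 * 1.0 + y) / (0.5 + 1.0)
print(z, z_exact)
```

The gradient-descent solve converges to the closed-form answer here; in real inverse problems the inner solve would use the forward operator, which is where the paper's approximate-operator machinery comes in.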