[2602.21426] Proximal-IMH: Proximal Posterior Proposals for Independent Metropolis-Hastings with Approximate Operators

arXiv - Machine Learning · 3 min read

Summary

The paper introduces Proximal-IMH, a novel sampling method for Bayesian inverse problems that enhances the efficiency of the Independent Metropolis-Hastings algorithm by correcting biases in approximate posterior distributions.

Why It Matters

This research addresses the challenge of sampling from complex posterior distributions in Bayesian inference, which is crucial for various applications in science and engineering. By improving acceptance rates and mixing, Proximal-IMH can lead to more accurate and efficient Bayesian analyses, making it a valuable contribution to the field of machine learning.

Key Takeaways

  • Proximal-IMH corrects biases in approximate posterior distributions.
  • The method improves acceptance rates and mixing in sampling.
  • It is applicable to both linear and nonlinear input-output operators.
  • Numerical experiments show Proximal-IMH outperforms existing IMH variants.
  • The approach is particularly useful for inverse problems where exact sampling is computationally expensive.
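The "auxiliary optimization problem" behind the bias correction can be written as a penalized MAP problem in the standard proximal form. The exact objective is not spelled out in this summary, so the weighting, the penalty parameter λ, and the notation π for the exact posterior below are assumptions, sketched for intuition:

```latex
y \;=\; \operatorname{prox}_{\lambda U}(z)
  \;=\; \arg\min_{x}\ \Big\{\, U(x) \;+\; \tfrac{1}{2\lambda}\,\lVert x - z\rVert^{2} \,\Big\},
\qquad U(x) \;=\; -\log \pi(x),
```

where z is a draw from the cheap approximate posterior. A small λ keeps y close to z (stability around the approximate reference point), while a large λ pulls y toward the exact model, which is exactly the trade-off described in the takeaways above.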

Computer Science > Machine Learning
arXiv:2602.21426 (cs) · Submitted on 24 Feb 2026

Title: Proximal-IMH: Proximal Posterior Proposals for Independent Metropolis-Hastings with Approximate Operators
Authors: Youguang Chen, George Biros

Abstract: We consider the problem of sampling from a posterior distribution arising in Bayesian inverse problems in science, engineering, and imaging. Our method belongs to the family of independence Metropolis-Hastings (IMH) sampling algorithms, which are common in Bayesian inference. Relying on the existence of an approximate posterior distribution that is cheaper to sample from but may have significant bias, we introduce Proximal-IMH, a scheme that removes this bias by correcting samples from the approximate posterior through an auxiliary optimization problem. This yields a local adjustment that trades off adherence to the exact model against stability around the approximate reference point. For idealized settings, we prove that the proximal correction tightens the match between approximate and exact posteriors, thereby improving acceptance rates and mixing. The method applies to both linear and nonlinear input-output operators and is particularly suitable for inverse problems where exact posterior sampling is too expensive. We present numeri...
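The abstract describes a two-step loop: correct each approximate-posterior sample via a proximal optimization, then use the corrected sample as an independence Metropolis-Hastings proposal. A minimal 1D sketch of that loop is below; it is not the paper's algorithm. The Gaussian target, the biased approximate posterior, the penalty weight `lam`, and the closed-form prox are all illustrative assumptions chosen so that the push-forward proposal density is available exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): exact posterior N(2, 1), so the negative
# log-density is U(x) = (x - 2)^2 / 2 up to an additive constant.
def log_target(x):
    return -0.5 * (x - 2.0) ** 2

# Cheap but biased approximate posterior: N(0, 2).
approx_mean, approx_std = 0.0, 2.0

lam = 0.5  # proximal strength: larger values pull harder toward the exact model

# Proximal correction of an approximate sample z:
#   y = argmin_x  U(x) + (x - z)^2 / (2 * lam),
# which for this quadratic U has the closed form below.
def prox(z):
    return (2.0 * lam + z) / (lam + 1.0)

# The corrected proposal is the push-forward of N(0, 2) through prox;
# since prox is affine here, it is N(2*lam/(lam+1), (approx_std/(lam+1))^2).
prop_mean = 2.0 * lam / (lam + 1.0)
prop_std = approx_std / (lam + 1.0)

def log_proposal(y):
    # Unnormalized log-density; the constant cancels in the MH ratio.
    return -0.5 * ((y - prop_mean) / prop_std) ** 2

# Independence Metropolis-Hastings using the corrected samples as proposals.
x, accepts, samples = 0.0, 0, []
for _ in range(20000):
    z = rng.normal(approx_mean, approx_std)   # cheap approximate sample
    y = prox(z)                               # proximal bias correction
    log_alpha = (log_target(y) - log_proposal(y)) - (log_target(x) - log_proposal(x))
    if np.log(rng.uniform()) < log_alpha:
        x, accepts = y, accepts + 1
    samples.append(x)

samples = np.array(samples)
print(f"acceptance rate: {accepts / len(samples):.2f}")
print(f"posterior mean estimate: {samples.mean():.2f} (exact: 2.00)")
```

Note that the toy proposal is heavier-tailed than the target, which keeps the IMH importance ratio bounded; the paper's acceptance-rate and mixing guarantees concern the idealized settings it analyzes, not this sketch.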
