[2601.20888] Latent-IMH: Efficient Bayesian Inference for Inverse Problems with Approximate Operators
arXiv:2601.20888 (stat.ML) [Submitted on 28 Jan 2026 (v1), last revised 4 Mar 2026 (this version, v2)]

Authors: Youguang Chen, George Biros

Abstract: We study sampling from posterior distributions in Bayesian linear inverse problems in which $A$, the parameter-to-observable operator, is computationally expensive. In many applications, $A$ can be factored in a manner that facilitates the construction of a cost-effective approximation $\tilde{A}$. In this framework, we introduce Latent-IMH, a sampling method based on the independent Metropolis-Hastings (IMH) sampler. Latent-IMH first generates intermediate latent variables using the approximate operator $\tilde{A}$, and then refines them using the exact operator $A$. Its primary benefit is that it shifts the computational cost to an offline phase. We analyze the performance of Latent-IMH theoretically via KL-divergence and mixing-time bounds. Through numerical experiments on several model problems, we show that, under reasonable assumptions, Latent-IMH outperforms state-of-the-art methods such as the No-U-Turn Sampler (NUTS) in computational efficiency. In some cases, Latent-IMH can be orders of magnitude faster than existing schemes.

Subjects: Machine Learning (stat.ML)
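The core idea of correcting cheap-surrogate proposals with the exact operator can be illustrated with a generic independent Metropolis-Hastings sampler. The sketch below is not the paper's two-stage latent construction; it is a minimal, hypothetical linear-Gaussian toy problem in which the proposal distribution is built offline from an approximate operator `A_tilde`, and each proposal is accepted or rejected using the exact (expensive) operator `A`. All names, dimensions, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not from the paper): a small linear inverse
# problem y = A x + noise, with a standard-normal prior x ~ N(0, I).
d, m, sigma = 3, 5, 0.1
A = rng.standard_normal((m, d))                     # "expensive" exact operator
A_tilde = A + 0.05 * rng.standard_normal((m, d))    # cheap approximate operator
x_true = rng.standard_normal(d)
y = A @ x_true + sigma * rng.standard_normal(m)

def gaussian_posterior(Aop):
    """Closed-form Gaussian posterior (mean, covariance) for a linear-Gaussian model."""
    cov = np.linalg.inv(Aop.T @ Aop / sigma**2 + np.eye(d))
    mean = cov @ Aop.T @ y / sigma**2
    return mean, cov

# Offline phase: build the IMH proposal from the approximate operator.
mu_q, cov_q = gaussian_posterior(A_tilde)
cov_q_inv = np.linalg.inv(cov_q)
chol_q = np.linalg.cholesky(cov_q)

def log_target(x):
    """Unnormalized exact log-posterior; the only place the expensive A is used."""
    r = A @ x - y
    return -0.5 * (r @ r) / sigma**2 - 0.5 * (x @ x)

def log_proposal(x):
    """Log-density (up to a constant) of the surrogate-built Gaussian proposal."""
    dxq = x - mu_q
    return -0.5 * dxq @ cov_q_inv @ dxq

# Online phase: independent MH. Proposals are i.i.d. draws from the surrogate
# posterior; the accept/reject step corrects them with the exact likelihood.
n_steps = 2000
x = mu_q.copy()
samples, accepts = [], 0
for _ in range(n_steps):
    xp = mu_q + chol_q @ rng.standard_normal(d)
    log_alpha = (log_target(xp) - log_target(x)) - (log_proposal(xp) - log_proposal(x))
    if np.log(rng.uniform()) < log_alpha:
        x, accepts = xp, accepts + 1
    samples.append(x)
samples = np.asarray(samples)
print("acceptance rate:", accepts / n_steps)
print("posterior mean estimate:", samples.mean(axis=0))
```

Because $\tilde{A}$ is close to $A$ here, the proposal is close to the target and the acceptance rate stays high; as the surrogate degrades, IMH acceptance drops, which is the regime the paper's KL-divergence and mixing-time analysis quantifies.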