[2602.21357] Conditional neural control variates for variance reduction in Bayesian inverse problems

arXiv - Machine Learning

Summary

This paper introduces conditional neural control variates, a method that reduces the variance of Monte Carlo estimators in Bayesian inverse problems and thereby improves estimation efficiency.

Why It Matters

The study addresses a central challenge in Bayesian inference for inverse problems: in high-dimensional settings, plain Monte Carlo estimation can require prohibitively many posterior samples, especially when each sample involves solving a partial differential equation. By introducing a modular, neural-network-based variance reduction method, this research can improve computational efficiency across applications such as physics-based modeling.

Key Takeaways

  • Introduces conditional neural control variates for variance reduction in Bayesian inference.
  • Leverages Stein's identity to build zero-mean control variates that scale to high-dimensional problems.
  • Demonstrates substantial variance reduction on Darcy flow inverse problems.
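To ground the variance-reduction idea, here is a minimal sketch of classical control variates (not the paper's neural method): to estimate E[f(X)], subtract a correlated function h(X) with known mean, scaled by the optimal coefficient c* = Cov(f, h)/Var(h). The example below uses f(x) = exp(x) and h(x) = x under a standard normal, both chosen purely for illustration.

```python
import numpy as np

# Illustrative control variate sketch: estimate E[exp(X)], X ~ N(0, 1).
# True value is exp(0.5). The control h(x) = x has known mean 0 and is
# correlated with f, so subtracting c * h reduces estimator variance.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
f = np.exp(x)  # quantity of interest
h = x          # control variate with known E[h] = 0

# Optimal coefficient c* = Cov(f, h) / Var(h), estimated from the samples.
c = np.cov(f, h)[0, 1] / np.var(h)
cv = f - c * h  # same mean as f, but lower variance

print("plain MC estimate:", f.mean(), "variance:", f.var())
print("control variate estimate:", cv.mean(), "variance:", cv.var())
```

The conditional ("amortized") twist in the paper is that the control variate is a neural network conditioned on the observed data, so one trained model serves many inverse-problem instances instead of fitting a coefficient per problem.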

Statistics > Machine Learning · arXiv:2602.21357 (stat) · [Submitted on 24 Feb 2026]

Title: Conditional neural control variates for variance reduction in Bayesian inverse problems
Authors: Ali Siahkoohi, Hyunwoo Oh

Abstract: Bayesian inference for inverse problems involves computing expectations under posterior distributions -- e.g., posterior means, variances, or predictive quantities -- typically via Monte Carlo (MC) estimation. When the quantity of interest varies significantly under the posterior, accurate estimates demand many samples -- a cost often prohibitive for partial differential equation-constrained problems. To address this challenge, we introduce conditional neural control variates, a modular method that learns amortized control variates from joint model-data samples to reduce the variance of MC estimators. To scale to high-dimensional problems, we leverage Stein's identity to design an architecture based on an ensemble of hierarchical coupling layers with tractable Jacobian trace computation. Training requires: (i) samples from the joint distribution of unknown parameters and observed data; and (ii) the posterior score function, which can be computed from physics-based likelihood evaluations, neural operator surrogates, or learned generative models such as conditional normalizing flows...
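The Stein's identity mentioned in the abstract states that for a smooth vector field φ with mild growth, E_p[∇·φ(x) + φ(x)·∇ log p(x)] = 0, so such functions are zero-mean by construction and can serve as control variates using only the score function ∇ log p, not the normalizing constant. The sketch below checks this numerically for a standard Gaussian (where the score is -x) with the simple choice φ(x) = x; the paper's architecture instead parameterizes φ with coupling layers whose Jacobian trace (the divergence term) is tractable.

```python
import numpy as np

# Numerical check of Stein's identity for p = N(0, I_d), score(x) = -x.
# With phi(x) = x, the divergence of phi is d, and
#   E_p[div phi(x) + phi(x) . score(x)] = E[d - ||x||^2] = 0.
rng = np.random.default_rng(1)
d = 3
x = rng.standard_normal((200_000, d))

score = -x                          # grad log p for a standard Gaussian
phi = x                             # simple illustrative test function
div_phi = d * np.ones(len(x))       # divergence of phi(x) = x
stein = div_phi + np.sum(phi * score, axis=1)

print("sample mean of Stein function:", stein.mean())  # close to 0
```

Any neural φ plugged into this construction inherits the zero-mean property, which is what makes the family safe to subtract from a Monte Carlo estimator: a poorly trained φ can fail to reduce variance, but it cannot bias the estimate.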

