[2512.04954] Amortized Inference of Multi-Modal Posteriors using Likelihood-Weighted Normalizing Flows
Summary
This paper introduces a novel technique for amortized posterior estimation using Normalizing Flows trained with likelihood-weighted importance sampling, enabling inference in high-dimensional inverse problems without requiring posterior training samples.
Why It Matters
The research addresses the challenge of efficiently estimating multi-modal posteriors, which arises across machine learning and statistics whenever an inverse problem admits several distinct solutions. By improving reconstruction fidelity through a better-chosen base distribution, this work contributes to inference techniques for fields that rely on accurate probabilistic modeling.
Key Takeaways
- Introduces a method for amortized posterior estimation using Normalizing Flows.
- Demonstrates improved inference in high-dimensional inverse problems.
- Highlights the impact of the base distribution's topology on the modeled posterior: unimodal bases cannot capture disconnected support.
- Shows that a Gaussian Mixture Model base, with one component per target mode, enhances reconstruction fidelity.
- Provides empirical validation on multi-modal benchmark tasks.
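The likelihood-weighted training idea behind the takeaways can be sketched on a toy problem. This is a hypothetical illustration, not the paper's code: the observation model (x = θ² plus Gaussian noise), the noise scale of 0.5, and the broad Gaussian prior/proposal are all assumptions chosen to produce a bimodal posterior. The key step is computing self-normalized importance weights w_i ∝ p(x|θ_i)·π(θ_i)/q(θ_i) for proposal draws, which would then multiply the flow's log-density in a weighted maximum-likelihood loss, so no posterior samples are ever needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal inverse problem: x = theta^2 + noise, so for x_obs = 4 both
# theta = +2 and theta = -2 are consistent with the data.
x_obs = 4.0

def log_likelihood(theta):
    # Gaussian noise with sigma = 0.5 (toy assumption)
    return -0.5 * ((x_obs - theta**2) / 0.5) ** 2

def log_prior(theta):
    # Broad Gaussian prior, sigma = 5 (toy assumption)
    return -0.5 * (theta / 5.0) ** 2

# Proposal q(theta): here a broad Gaussian; in amortized training this role
# is played by a distribution the flow itself can sample and evaluate.
theta = rng.normal(0.0, 5.0, size=10_000)
log_q = -0.5 * (theta / 5.0) ** 2  # unnormalized; constants cancel below

# Self-normalized importance weights: w_i ∝ p(x|θ_i) π(θ_i) / q(θ_i)
log_w = log_likelihood(theta) + log_prior(theta) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()

# In training, the weighted flow loss would be: -sum_i w_i * log q_flow(θ_i).
# Here we just check the weights concentrate on the two modes near ±2.
mode_mass = w[np.abs(np.abs(theta) - 2.0) < 0.5].sum()
print(round(float(mode_mass), 2))
```

Because the weights are self-normalized, only likelihood and prior ratios matter; nearly all weight lands on the two disconnected modes, which is exactly the regime where the paper's base-distribution topology question becomes important.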
arXiv:2512.04954 (cs) [Submitted on 4 Dec 2025 (v1), last revised 20 Feb 2026 (this version, v2)]
Title: Amortized Inference of Multi-Modal Posteriors using Likelihood-Weighted Normalizing Flows
Authors: Rajneil Baruah
Abstract: We present a novel technique for amortized posterior estimation using Normalizing Flows trained with likelihood-weighted importance sampling. This approach allows efficient inference of theoretical parameters in high-dimensional inverse problems without the need for posterior training samples. We implement the method on multi-modal benchmark tasks in 2D and 3D to assess its efficacy. A critical observation of our study is the impact of the topology of the base distribution on the modelled posteriors. We find that standard unimodal base distributions fail to capture disconnected support, resulting in spurious probability bridges between modes. We demonstrate that initializing the flow with a Gaussian Mixture Model that matches the cardinality of the target modes significantly improves reconstruction fidelity, as measured by distance and divergence metrics.
Subjects: Machine Learning (cs.LG); High Energy Physics - Experiment (hep-ex); High Energy Physics - Phenomenology (hep-ph); Computational Physics (physics.comp-ph); Data ...
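The abstract's remedy, a Gaussian Mixture Model base whose component count matches the number of target modes, can be illustrated with a minimal sampling sketch. This is an assumption-laden toy (equal component weights, isotropic components, hand-picked means at ±4), not the paper's implementation; it only shows why such a base has disconnected support where a standard normal does not.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_gmm(means, cov_scale, n, rng):
    """Draw n samples from an equal-weight isotropic Gaussian mixture.

    means: (K, D) array of component means, one per target mode.
    cov_scale: standard deviation of each (isotropic) component.
    """
    K, D = means.shape
    comp = rng.integers(0, K, size=n)               # pick a component per sample
    eps = rng.normal(0.0, cov_scale, size=(n, D))   # component-local noise
    return means[comp] + eps, comp

# Two well-separated components, mimicking a bimodal 2D target.
means = np.array([[-4.0, 0.0], [4.0, 0.0]])
z, comp = sample_gmm(means, cov_scale=0.5, n=5_000, rng=rng)

# Essentially no base mass falls in the gap between the modes, so the flow
# need not stretch a "probability bridge" across disconnected support,
# which is the failure mode the paper observes for unimodal bases.
bridge_frac = np.mean(np.abs(z[:, 0]) < 2.0)
print(round(float(bridge_frac), 3))
```

In the paper's setting the flow is initialized on such a base; since a diffeomorphic flow preserves the topology of its base distribution, starting from K disconnected blobs rather than one Gaussian is what allows the pushforward to represent K cleanly separated posterior modes.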