[2602.23089] Physics-informed neural particle flow for the Bayesian update step
Summary
This paper introduces a physics-informed neural particle flow method for the Bayesian update step, addressing computational challenges in high-dimensional nonlinear estimation.
Why It Matters
The proposed method improves the efficiency and accuracy of the Bayesian update in high-dimensional nonlinear models, a core bottleneck in state estimation and filtering. By embedding the governing transport PDE as a constraint in the training loss, it offers an alternative to classical log-homotopy particle flow formulations, which tend to yield stiff differential equations, and improves computational performance and robustness in high-dimensional scenarios.
Key Takeaways
- Introduces a physics-informed neural particle flow for Bayesian updates.
- Addresses computational challenges in high-dimensional nonlinear estimation.
- Utilizes a governing partial differential equation to enhance training.
- Demonstrates improved mode coverage and robustness in experimental validation.
- Eliminates the need for ground-truth posterior samples through unsupervised training.
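The governing PDE mentioned in the takeaways can be reconstructed from the abstract's two ingredients, the log-homotopy density trajectory and the continuity equation. The derivation below is an illustrative reconstruction under the standard log-homotopy $p_\lambda \propto g\,h^\lambda$ (prior $g$, likelihood $h$), not an equation quoted from the paper:

```latex
\begin{align*}
\log p_\lambda(x) &= \log g(x) + \lambda \log h(x) - \log Z(\lambda)
  && \text{(log-homotopy, } \lambda \in [0,1])\\
\partial_\lambda \log p_\lambda &= \log h(x) - \tfrac{d}{d\lambda}\log Z(\lambda)\\
\partial_\lambda p_\lambda + \nabla\!\cdot\!\big(p_\lambda\, v\big) &= 0
  && \text{(continuity equation, velocity } v(x,\lambda))\\
\Leftrightarrow\; \partial_\lambda \log p_\lambda &= -\nabla\!\cdot\! v - v\cdot\nabla\log p_\lambda\\
\Rightarrow\; \log h(x) + \nabla\!\cdot\! v + v\cdot\nabla\log p_\lambda
  &= \mathbb{E}_{p_\lambda}\!\left[\log h\right]
  && \text{(constraint on } v)
\end{align*}
```

The last line uses $\tfrac{d}{d\lambda}\log Z(\lambda) = \mathbb{E}_{p_\lambda}[\log h]$; its left-hand side depends on $x$ while the right-hand side does not, which is exactly the kind of residual structure a physics-informed loss can penalize.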
Abstract
arXiv:2602.23089 (cs), Computer Science > Machine Learning. Submitted on 26 Feb 2026.
Authors: Domonkos Csuzdi, Tamás Bécsi, Olivér Törő
The Bayesian update step poses significant computational challenges in high-dimensional nonlinear estimation. While log-homotopy particle flow filters offer an alternative to stochastic sampling, existing formulations usually yield stiff differential equations. Conversely, existing deep learning approximations typically treat the update as a black-box task or rely on asymptotic relaxation, neglecting the exact geometric structure of the finite-horizon probability transport. In this work, we propose a physics-informed neural particle flow, which is an amortized inference framework. To construct the flow, we couple the log-homotopy trajectory of the prior-to-posterior density function with the continuity equation describing the density evolution. This derivation yields a governing partial differential equation (PDE), referred to as the master PDE. By embedding this PDE as a physical constraint into the loss function, we train a neural network to approximate the transport velocity field. This approach enables purely unsupervised training, eliminating the need for ground-truth posterior samples. We d...
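The abstract describes embedding the master PDE as a physical constraint in the loss, so a velocity network can be trained without posterior samples. A minimal PyTorch sketch of that idea for a 1-D Gaussian prior and Gaussian likelihood is below; the specific loss (variance of the unnormalized PDE residual over collocation particles, which is constant in x at the optimum), the network shape, and all names are illustrative assumptions, not the paper's exact formulation:

```python
import torch

torch.manual_seed(0)

def log_prior(x):                     # N(0, 1), up to an additive constant
    return -0.5 * x**2

def log_lik(x):                       # measurement y = 1, noise std 0.5
    return -0.5 * ((x - 1.0) / 0.5)**2

# Velocity field v(x, lam): maps (particle, pseudo-time) to a drift.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x, lam):
    """log h + div(v) + v . grad(log p_lam); constant in x when v is exact."""
    x = x.requires_grad_(True)
    v = net(torch.cat([x, lam], dim=1))
    dv_dx = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    logp = log_prior(x) + lam * log_lik(x)           # unnormalized log p_lam
    grad_logp = torch.autograd.grad(logp.sum(), x, create_graph=True)[0]
    return log_lik(x) + dv_dx + v * grad_logp

losses = []
for step in range(500):
    x = 2.0 * torch.randn(256, 1)                    # collocation particles
    lam = torch.full((256, 1), torch.rand(()).item())  # one lam per batch
    loss = pde_residual(x, lam).var()                # penalize x-dependence
    losses.append(loss.item())
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A fixed λ per batch matters here: the residual's target value (the normalizing constant's derivative) depends on λ, so the variance penalty is only meaningful within a single pseudo-time slice. Training is purely unsupervised, matching the abstract's claim that no ground-truth posterior samples are needed.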