[2604.04971] A Theory-guided Weighted $L^2$ Loss for solving the BGK model via Physics-informed neural networks
Computer Science > Machine Learning

arXiv:2604.04971 (cs) [Submitted on 4 Apr 2026]

Title: A Theory-guided Weighted $L^2$ Loss for solving the BGK model via Physics-informed neural networks

Authors: Gyounghun Ko, Sung-Jun Son, Seung Yeon Cho, Myeong-Su Lee

Abstract: While Physics-Informed Neural Networks offer a promising framework for solving partial differential equations, the standard $L^2$ loss formulation is fundamentally insufficient when applied to the Bhatnagar-Gross-Krook (BGK) model. Specifically, simply minimizing the standard loss does not guarantee accurate predictions of the macroscopic moments, causing the approximate solutions to fail to capture the true physical solution. To overcome this limitation, we introduce a velocity-weighted $L^2$ loss function designed to effectively penalize errors in the high-velocity regions. By establishing a stability estimate for the proposed approach, we show that minimizing the proposed weighted loss guarantees the convergence of the approximate solution. In addition, numerical experiments demonstrate that employing this weighted PINN loss yields superior accuracy and robustness across various benchmarks compared to the standard approach.

Subjects: Machine Learning (cs.LG); Numerical Analysis (math.NA); Computational Physics ...
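The abstract's core idea, replacing the uniform $L^2$ residual loss with one weighted by a function of the particle velocity, can be illustrated with a minimal sketch. The weight $w(v) = (1 + |v|^2)^s$ used below is an illustrative polynomial choice, not necessarily the paper's exact weight; `residual` stands for the PDE residual of a PINN evaluated at collocation points with velocities `v`.

```python
import numpy as np

def weighted_l2_loss(residual, v, s=1.0):
    """Velocity-weighted squared-L2 loss: the mean of w(v) * residual**2.

    The illustrative weight w(v) = (1 + |v|^2)**s grows with |v|, so
    residual errors at high-velocity collocation points are penalized
    more heavily than under the standard (unweighted, s = 0) loss.
    """
    w = (1.0 + np.abs(v) ** 2) ** s
    return float(np.mean(w * residual ** 2))

# Toy comparison: the same residuals, concentrated at a high velocity,
# contribute far more to the weighted loss than to the standard one.
r = np.array([0.0, 0.0, 1.0])   # residual is nonzero only at v = 3
v = np.array([0.0, 1.0, 3.0])
standard = weighted_l2_loss(r, v, s=0.0)   # plain mean-squared residual
weighted = weighted_l2_loss(r, v, s=1.0)   # high-velocity error amplified
```

In a training loop, `weighted` would be the quantity minimized by the optimizer; the amplification at large $|v|$ is what, per the abstract's stability estimate, ties small loss values to accurate macroscopic moments.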