[2602.17776] Solving and learning advective multiscale Darcian dynamics with the Neural Basis Method
Summary
The paper presents the Neural Basis Method, a new approach for solving and learning advective multiscale Darcian dynamics, enhancing stability and interpretability in physics-informed machine learning models.
Why It Matters
This research addresses challenges in physics-informed machine learning by improving the interpretability and stability of solutions in complex dynamical systems. The proposed method could significantly enhance predictive modeling in various scientific fields, making it relevant for researchers and practitioners in numerical analysis and machine learning.
Key Takeaways
- Introduces the Neural Basis Method for improved solution stability.
- Enhances interpretability of physics-informed models by separating approximation and enforcement errors.
- Demonstrates effective parametric inference in complex dynamical systems.
- Offers a robust framework for coupling physics with machine learning.
- Potential applications in various fields requiring accurate predictive modeling.
Abstract
Mathematics > Numerical Analysis · arXiv:2602.17776 (math) · Submitted on 19 Feb 2026
Title: Solving and learning advective multiscale Darcian dynamics with the Neural Basis Method
Authors: Yuhe Wang, Min Wang
Physics-governed models are increasingly paired with machine learning for accelerated predictions, yet most "physics-informed" formulations treat the governing equations as a penalty loss whose scale and meaning are set by heuristic balancing. This blurs operator structure, confounding solution approximation error with governing-equation enforcement error and making the solving and learning process hard to interpret and control. Here we introduce the Neural Basis Method, a projection-based formulation that couples a predefined, physics-conforming neural basis space with an operator-induced residual metric to obtain a well-conditioned deterministic minimization. Stability and reliability then hinge on this metric: the residual is not merely an optimization objective but a computable certificate tied to approximation and enforcement, remaining stable under basis enrichment and yielding reduced coordinates that are learnable across parametric instances. We use advective multiscale Darcian dynamics as a concrete demonstration of this broader point. Our method produces accurate and robu...
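To make the projection idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of solving a PDE over a fixed "neural basis" by deterministic least squares rather than a penalty loss. It uses random tanh features as a stand-in for a pretrained, physics-conforming basis, and collocates the residual of a simple 1D Darcy-type problem -(k u')' = f on (0, 1) with zero boundary values; the basis size, feature scales, and boundary weighting are all illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a pretrained neural basis: fixed random
# tanh features phi_j(x) = tanh(w_j * x + b_j).
rng = np.random.default_rng(0)
n_basis, n_colloc = 60, 200
w = rng.normal(scale=6.0, size=n_basis)   # random feature weights (assumed)
b = rng.normal(size=n_basis)              # random feature biases (assumed)

def phi(x):
    """Basis matrix, shape (len(x), n_basis)."""
    return np.tanh(np.outer(x, w) + b)

def d2phi(x):
    """Second derivatives of the basis functions."""
    s = np.tanh(np.outer(x, w) + b)
    return -2.0 * s * (1.0 - s**2) * w**2

# 1D Darcy-type model problem with constant permeability k:
#   -(k u')' = f,  u(0) = u(1) = 0,  manufactured so u(x) = sin(pi x).
k = 1.0
x = np.linspace(0.0, 1.0, n_colloc)
f = k * np.pi**2 * np.sin(np.pi * x)

# Collocated residual rows: -k u''(x_i) = f(x_i), plus heavily
# weighted boundary rows enforcing u(0) = u(1) = 0.
A = -k * d2phi(x)
rhs = f.copy()
bc_weight = 1e3                            # illustrative weighting choice
A = np.vstack([A, bc_weight * phi(np.array([0.0, 1.0]))])
rhs = np.concatenate([rhs, [0.0, 0.0]])

# Deterministic least-squares projection: the reduced coordinates c
# minimize the residual norm in one linear solve, with no loss balancing.
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
u_hat = phi(x) @ c
err = np.max(np.abs(u_hat - np.sin(np.pi * x)))
print(f"max error vs exact solution: {err:.2e}")
```

The point of the sketch is structural: because the basis is fixed, the unknowns enter linearly and the residual minimization is a well-posed linear least-squares problem, so the residual norm can be read off directly as a certificate rather than tuned as a penalty weight.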