[2602.14853] BEACONS: Bounded-Error, Algebraically-Composable Neural Solvers for Partial Differential Equations
Summary
The paper presents BEACONS, a framework for constructing neural solvers for partial differential equations (PDEs) with certified worst-case error bounds, making them reliable even in extrapolatory regimes beyond the training data.
Why It Matters
This research addresses the unreliability of neural networks when generalizing beyond their training data, a central obstacle in computational physics. By providing formally verified convergence, stability, and conservation guarantees, BEACONS makes neural PDE solvers trustworthy in regimes that cannot be experimentally or analytically validated, which is crucial for applications requiring accurate predictions in untested domains.
Key Takeaways
- BEACONS framework allows for reliable extrapolation of PDE solutions beyond training data.
- It combines shallow neural networks into deep architectures to suppress approximation errors.
- The framework includes an automatic code generator and a theorem-proving system for correctness certification.
- Applications demonstrated include linear and non-linear PDEs, showcasing its versatility.
- BEACONS offers provable correctness guarantees, an advantage over classical physics-informed neural networks (PINNs), which provide no such certificates.
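The "algebraically-composable" idea behind the second takeaway can be sketched with a standard sup-norm composition argument: if f is approximated within ε_f, g within ε_g, and g is L-Lipschitz, then g∘f is approximated by the composed approximants within ε_g + L·ε_f. The sketch below is illustrative only (the names `BoundedApprox` and `compose` are not from the paper), using truncated Taylor series as stand-ins for certified shallow networks:

```python
import math
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class BoundedApprox:
    """An approximation of `exact` with a certified sup-norm error bound,
    plus a Lipschitz constant for the exact function on its domain."""
    exact: Callable[[float], float]
    approx: Callable[[float], float]
    err_bound: float   # sup_x |exact(x) - approx(x)| <= err_bound
    lipschitz: float   # |exact(a) - exact(b)| <= lipschitz * |a - b|

def compose(g: BoundedApprox, f: BoundedApprox) -> BoundedApprox:
    """Composition rule: if |f - f^| <= e_f, |g - g^| <= e_g, and g is
    L-Lipschitz, then |g(f(x)) - g^(f^(x))| <= e_g + L * e_f."""
    return BoundedApprox(
        exact=lambda x: g.exact(f.exact(x)),
        approx=lambda x: g.approx(f.approx(x)),
        err_bound=g.err_bound + g.lipschitz * f.err_bound,
        lipschitz=g.lipschitz * f.lipschitz,
    )

# Stand-ins for certified shallow approximants on [-1, 1], with the
# Taylor remainder supplying the certified error bound.
f = BoundedApprox(math.sin, lambda x: x - x**3 / 6,
                  err_bound=1 / 120,       # |x|^5 / 5! on [-1, 1]
                  lipschitz=1.0)           # |cos| <= 1
g = BoundedApprox(math.exp, lambda y: 1 + y + y**2 / 2 + y**3 / 6,
                  err_bound=math.e / 24,   # e * |y|^4 / 4! on [-1, 1]
                  lipschitz=math.e)        # sup of |exp'| on [-1, 1]

h = compose(g, f)  # approximates exp(sin(x)) with a certified bound

grid = [i / 500 - 1.0 for i in range(1001)]
worst = max(abs(h.exact(x) - h.approx(x)) for x in grid)
assert worst <= h.err_bound  # observed error never exceeds the certificate
print(f"certified bound: {h.err_bound:.4f}, observed worst error: {worst:.4f}")
```

The key property is that the composed bound is computed algebraically from the components' certificates, without ever sampling the composed function, which is what allows such bounds to remain valid outside the training domain.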
Computer Science > Machine Learning
arXiv:2602.14853 (cs)
[Submitted on 16 Feb 2026]

Title: BEACONS: Bounded-Error, Algebraically-Composable Neural Solvers for Partial Differential Equations
Authors: Jonathan Gorard, Ammar Hakim, James Juno

Abstract: The traditional limitations of neural networks in reliably generalizing beyond the convex hulls of their training data present a significant problem for computational physics, in which one often wishes to solve PDEs in regimes far beyond anything which can be experimentally or analytically validated. In this paper, we show how it is possible to circumvent these limitations by constructing formally-verified neural network solvers for PDEs, with rigorous convergence, stability, and conservation properties, whose correctness can therefore be guaranteed even in extrapolatory regimes. By using the method of characteristics to predict the analytical properties of PDE solutions a priori (even in regions arbitrarily far from the training domain), we show how it is possible to construct rigorous extrapolatory bounds on the worst-case L^∞ errors of shallow neural network approximations. Then, by decomposing PDE solutions into compositions of simpler functions, we show how it is possible to compose these shallow neural networks together to form...
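The method of characteristics mentioned in the abstract is a classical technique, and its extrapolatory power is easy to illustrate on the simplest case, linear advection u_t + a·u_x = 0: the solution is constant along the characteristic lines x − a·t = const, so u(x, t) = u0(x − a·t) can be evaluated exactly at points arbitrarily far from any training window. The constants and the initial profile below are illustrative, not from the paper:

```python
import math

# Linear advection u_t + a*u_x = 0: the solution is constant along
# characteristics x - a*t = const, so u(x, t) = u0(x - a*t) for any
# initial profile u0. Analytical properties of the solution are thus
# known a priori, arbitrarily far outside any training domain.
a = 2.0                            # advection speed (illustrative)
u0 = lambda x: math.exp(-x * x)    # Gaussian initial condition at t = 0

def u(x: float, t: float) -> float:
    """Exact solution, obtained by tracing the characteristic back to t = 0."""
    return u0(x - a * t)

# At t = 100 the profile is the initial bump shifted by a*t = 200, so the
# peak sits exactly at x = 200 even though no data was seen near there:
assert abs(u(200.0, 100.0) - u0(0.0)) < 1e-12
```

This a priori knowledge of where the solution's features must lie is what lets the paper bound a shallow network's worst-case error in regions it was never trained on.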