[2602.23113] Learning Physical Operators using Neural Operators
Summary
This paper presents a novel physics-informed training framework for neural operators that enhances their ability to generalize beyond training distributions in solving partial differential equations (PDEs).
Why It Matters
The research addresses significant limitations of current neural operator models, particularly their limited ability to generalize beyond training distributions and their reliance on a fixed temporal discretization. By introducing a modular architecture that decomposes PDEs into their constituent physical operators, this work could lead to more robust physics simulations, potentially transforming how complex physical systems are modeled and predicted.
Key Takeaways
- Introduces a modular mixture-of-experts architecture for neural operators.
- Enhances generalization to novel physical regimes through operator splitting methods.
- Enables continuous-in-time predictions using neural ordinary differential equations.
- Demonstrates superior performance on Navier-Stokes equations compared to existing methods.
- Maintains parameter efficiency while providing interpretable components.
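The takeaways above can be illustrated with a minimal sketch of the idea: the right-hand side of a neural ODE is assembled from a fixed finite-difference convolution for the linear operator plus a sum of learned "expert" operators for the nonlinear physics, and any standard ODE solver then gives continuous-in-time predictions. All names below (`laplacian_fd`, `make_nonlinear_expert`, `rhs`, `rk4_step`) are hypothetical placeholders, not the paper's actual implementation; the "expert" here is an untrained stand-in for a trained neural operator.

```python
import numpy as np

def laplacian_fd(u, dx):
    # Fixed finite-difference stencil for the linear diffusion operator:
    # a periodic 1-D Laplacian, equivalent to a [1, -2, 1] convolution.
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def make_nonlinear_expert(seed=0, width=16):
    # Stand-in for one learned "expert" (hypothetical): a tiny pointwise
    # MLP with random, untrained weights.
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0.0, 0.1, (1, width))
    w2 = rng.normal(0.0, 0.1, (width, 1))
    def expert(u):
        h = np.tanh(u[:, None] @ w1)
        return (h @ w2)[:, 0]
    return expert

def rhs(u, dx, nu, experts):
    # Neural-ODE right-hand side: fixed linear operator plus the sum of
    # the learned nonlinear physical operators (mixture of experts).
    return nu * laplacian_fd(u, dx) + sum(e(u) for e in experts)

def rk4_step(u, dt, f):
    # Classical RK4 step: any off-the-shelf ODE solver can advance the
    # state, which is what makes the predictions continuous in time.
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Usage: a 64-point periodic grid, one expert, a few RK4 steps.
n, dx, nu, dt = 64, 1.0 / 64, 1e-3, 1e-3
u = np.sin(2.0 * np.pi * np.arange(n) * dx)
experts = [make_nonlinear_expert()]
f = lambda v: rhs(v, dx, nu, experts)
for _ in range(10):
    u = rk4_step(u, dt, f)
```

Because the solver only ever queries `rhs`, the time step is not baked into the model, and swapping in a different set of experts changes the physics without retraining the rest of the architecture.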
Computer Science > Machine Learning
arXiv:2602.23113 (cs) [Submitted on 26 Feb 2026]
Title: Learning Physical Operators using Neural Operators
Authors: Vignesh Gopakumar, Ander Gray, Dan Giles, Lorenzo Zanisi, Matt J. Kusner, Timo Betcke, Stanislas Pamela, Marc Peter Deisenroth
Abstract: Neural operators have emerged as promising surrogate models for solving partial differential equations (PDEs), but struggle to generalise beyond training distributions and are often constrained to a fixed temporal discretisation. This work introduces a physics-informed training framework that addresses these limitations by decomposing PDEs using operator splitting methods, training separate neural operators to learn individual non-linear physical operators while approximating linear operators with fixed finite-difference convolutions. This modular mixture-of-experts architecture enables generalisation to novel physical regimes by explicitly encoding the underlying operator structure. We formulate the modelling task as a neural ordinary differential equation (ODE) where these learned operators constitute the right-hand side, enabling continuous-in-time predictions through standard ODE solvers and implicitly enforcing PDE constraints. Demonstrated on incompressible and compressible Navier-Stokes equations, our approach achieves better convergence and superior perf...
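The operator splitting the abstract refers to can be sketched on a simple 1-D model problem. Below, a Strang splitting alternates between the linear diffusion sub-problem, handled by a fixed finite-difference convolution, and the nonlinear advection sub-problem, which is the piece a trained neural operator would replace (here a plain upwind discretisation stands in for it). This is an illustrative sketch under those assumptions, not the paper's method; the function names are hypothetical.

```python
import numpy as np

def linear_halfstep(u, dt, dx, nu):
    # Explicit half-step for the linear diffusion sub-problem, using the
    # fixed [1, -2, 1] finite-difference convolution on a periodic grid.
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return u + 0.5 * dt * nu * lap

def nonlinear_step(u, dt, dx):
    # The sub-problem a learned neural operator would take over; here a
    # simple upwind discretisation of the Burgers advection term u*u_x.
    dudx = (u - np.roll(u, 1)) / dx
    return u - dt * u * dudx

def strang_step(u, dt, dx, nu):
    # Strang splitting: half a linear step, a full nonlinear step, then
    # another half linear step (second-order accurate in dt).
    u = linear_halfstep(u, dt, dx, nu)
    u = nonlinear_step(u, dt, dx)
    return linear_halfstep(u, dt, dx, nu)

# Usage: viscous Burgers on a 128-point periodic grid.
n, dx, nu, dt = 128, 1.0 / 128, 1e-3, 1e-4
u = np.sin(2.0 * np.pi * np.arange(n) * dx)
for _ in range(100):
    u = strang_step(u, dt, dx, nu)
```

The split keeps each sub-problem cheap and well understood: the linear part stays a fixed stencil, so only the nonlinear piece needs learned parameters, which is what makes the architecture modular and parameter-efficient.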