[2602.22059] NESTOR: A Nested MOE-based Neural Operator for Large-Scale PDE Pre-Training
Summary
The paper introduces NESTOR, a nested Mixture-of-Experts (MoE) neural operator designed for efficient large-scale pre-training across diverse PDE systems, improving both computational efficiency and model transferability.
Why It Matters
Neural operators already improve on traditional numerical methods for solving partial differential equations (PDEs), but existing models typically rely on a single network architecture, which limits their capacity to capture the heterogeneous features of diverse PDE systems. By routing inputs to specialized expert networks, the proposed approach can significantly improve the efficiency and generality of PDE modeling, which is crucial for a wide range of scientific and engineering applications.
Key Takeaways
- NESTOR utilizes a nested MoE framework to enhance neural operator capabilities.
- The model captures both global dependencies (via an image-level MoE) and local dependencies (via a token-level Sub-MoE).
- Large-scale pre-training on twelve PDE datasets from diverse sources demonstrates improved generalization.
- The approach allows selective activation of expert networks for better performance.
- Results indicate strong transferability to downstream tasks.
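The "selective activation" in the takeaways above is the standard MoE top-k gating idea: a learned router scores all experts, but only the k highest-scoring ones are actually evaluated. The paper does not publish implementation details here, so the following is a minimal NumPy sketch of top-k gating with hypothetical names (`W_gate`, `topk_gate`), not NESTOR's actual code:

```python
import numpy as np

def topk_gate(x, W_gate, k=2):
    """Score all experts for input x, keep only the top-k, and return
    their indices plus softmax-normalized mixing weights.
    W_gate: (d, n_experts) router weights (hypothetical)."""
    logits = x @ W_gate                       # one score per expert
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    w = np.exp(logits[top] - logits[top].max())
    return top, w / w.sum()                   # weights sum to 1

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
W_gate = rng.normal(size=(d, n_experts))
# Toy "experts": plain linear maps for illustration only.
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

idx, weights = topk_gate(x, W_gate, k=2)
# Only the 2 selected experts run; output is their weighted sum.
y = sum(w * (x @ experts[i]) for i, w in zip(idx, weights))
print(y.shape)  # (8,)
```

Because the unselected experts are never evaluated, compute per input stays roughly constant as the expert pool grows, which is what makes MoE attractive for large-scale pre-training.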
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.22059 (cs) [Submitted on 25 Feb 2026]
Title: NESTOR: A Nested MOE-based Neural Operator for Large-Scale PDE Pre-Training
Authors: Dengdi Sun, Xiaoya Zhou, Xiao Wang, Hao Si, Wanli Lyu, Jin Tang, Bin Luo
Abstract: Neural operators have emerged as an efficient paradigm for solving PDEs, overcoming the limitations of traditional numerical methods and significantly improving computational efficiency. However, due to the diversity and complexity of PDE systems, existing neural operators typically rely on a single network architecture, which limits their capacity to fully capture heterogeneous features and complex system dependencies. This constraint poses a bottleneck for large-scale PDE pre-training based on neural operators. To address these challenges, we propose a large-scale PDE pre-trained neural operator based on a nested Mixture-of-Experts (MoE) framework. In particular, the image-level MoE is designed to capture global dependencies, while the token-level Sub-MoE focuses on local dependencies. Our model can selectively activate the most suitable expert networks for a given input, thereby enhancing generalization and transferability. We conduct large-scale pre-training on twelve PDE datasets from diverse sources and successfully transfer...
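The abstract describes a two-level routing scheme: an image-level MoE makes a coarse, per-input choice, and the chosen expert is itself a token-level Sub-MoE that routes individual tokens. The paper does not specify the routers or experts, so the sketch below is only a plausible reading of that structure, with illustrative names (`nested_moe`, `sub_moes`) and top-1 routing at both levels for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_outer, n_inner, T = 8, 3, 4, 5  # feature dim, outer experts, inner experts, tokens

# Each outer "expert" is itself a token-level Sub-MoE:
# an inner router plus a bank of inner expert weight matrices.
sub_moes = [
    {"router": rng.normal(size=(d, n_inner)),
     "experts": [rng.normal(size=(d, d)) for _ in range(n_inner)]}
    for _ in range(n_outer)
]
outer_router = rng.normal(size=(d, n_outer))

def nested_moe(tokens):
    """tokens: (T, d) array of token features for one input."""
    # Image-level routing: one decision per input, made from the mean token,
    # selects which Sub-MoE handles this input (global dependencies).
    pooled = tokens.mean(axis=0)
    sub = sub_moes[int(np.argmax(pooled @ outer_router))]
    # Token-level routing inside the chosen Sub-MoE: each token picks
    # its own best inner expert (local dependencies).
    out = np.empty_like(tokens)
    for t, tok in enumerate(tokens):
        e = int(np.argmax(tok @ sub["router"]))
        out[t] = tok @ sub["experts"][e]
    return out

tokens = rng.normal(size=(T, d))
print(nested_moe(tokens).shape)  # (5, 8)
```

The split mirrors the abstract's division of labor: one coarse routing decision per input captures which family of dynamics the sample belongs to, while per-token routing adapts to local structure within it.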