[2510.16071] MNO: Multiscale Neural Operator for 3D Computational Fluid Dynamics
Summary
The paper presents the Multiscale Neural Operator (MNO), an architecture for 3D computational fluid dynamics that improves the accuracy and scalability of neural PDE solvers on irregular domains.
Why It Matters
This research addresses the limitations of existing neural operators in computational fluid dynamics, particularly in handling complex, multiscale fluid flows. By introducing MNO, the authors provide a framework that significantly improves prediction accuracy, which is crucial for applications in engineering and scientific simulations.
Key Takeaways
- MNO improves accuracy in solving PDEs for fluid dynamics.
- The architecture decomposes information across three attention scales: global, local (graph-level), and point-wise.
- MNO reduces prediction errors by 5% to 50% compared to existing methods.
- The framework is efficient for 3D unstructured point clouds.
- Explicit multiscale design is essential for advancing neural operators.
Computer Science > Machine Learning
arXiv:2510.16071 (cs)
[Submitted on 17 Oct 2025 (v1), last revised 25 Feb 2026 (this version, v2)]
Title: MNO: Multiscale Neural Operator for 3D Computational Fluid Dynamics
Authors: Qinxuan Wang, Chuang Wang, Mingyu Zhang, Jingwei Sun, Peipei Yang, Shuo Tang, Shiming Xiang
Abstract: Neural operators have emerged as a powerful data-driven paradigm for solving partial differential equations (PDEs), while their accuracy and scalability are still limited, particularly on irregular domains where fluid flows exhibit rich multiscale structures. In this work, we introduce the Multiscale Neural Operator (MNO), a new architecture for computational fluid dynamics (CFD) on 3D unstructured point clouds. MNO explicitly decomposes information across three scales: a global dimension-shrinkage attention module for long-range dependencies, a local graph attention module for neighborhood-level interactions, and a micro point-wise attention module for fine-grained details. This design preserves multiscale inductive biases while remaining computationally efficient. We evaluate MNO on diverse benchmarks, covering steady-state and unsteady flow scenarios with up to 300k points. Across all tasks, MNO consistently outperforms state-of-the-art baselines, reducing prediction errors by 5% to 50%. The resu...
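The three-scale decomposition described in the abstract can be sketched in code. The snippet below is an illustrative approximation, not the authors' implementation: the function names, the low-rank latent-token form of "dimension-shrinkage" attention, the kNN graph construction, and all weight shapes are assumptions about what such modules might look like.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_shrinkage_attention(x, w_down, w_up):
    """Global branch (assumed form): pool N points into k latent tokens
    ("dimension shrinkage"), then let every point attend over the latents.
    Cost is O(N*k) rather than O(N^2) full attention."""
    assign = softmax(x @ w_down, axis=0)          # (N, k) soft assignment
    latent = assign.T @ x                          # (k, d) pooled latent tokens
    attn = softmax(x @ w_up @ latent.T, axis=-1)   # (N, k) point-to-latent attention
    return attn @ latent                           # (N, d)

def local_graph_attention(x, coords, k=8):
    """Local branch (assumed form): each point attends over its k nearest
    spatial neighbours, capturing neighbourhood-level interactions."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    nbr = np.argsort(d2, axis=1)[:, :k]            # (N, k) neighbour indices
    out = np.empty_like(x)
    for i in range(len(x)):
        keys = x[nbr[i]]                           # (k, d) neighbour features
        out[i] = softmax(keys @ x[i]) @ keys       # attention-weighted mix
    return out

def micro_pointwise(x, w):
    """Micro branch (assumed form): per-point feature mixing for
    fine-grained details, with no cross-point interaction."""
    return np.tanh(x @ w)

# Hypothetical usage: combine the three scales with a residual connection.
rng = np.random.default_rng(0)
N, d, k = 32, 16, 4
x = rng.normal(size=(N, d))                        # per-point features
coords = rng.normal(size=(N, 3))                   # 3D point positions
w_down = rng.normal(size=(d, k)) * 0.1
w_up = rng.normal(size=(d, d)) * 0.1
w_mlp = rng.normal(size=(d, d)) * 0.1
y = (x
     + global_shrinkage_attention(x, w_down, w_up)
     + local_graph_attention(x, coords)
     + micro_pointwise(x, w_mlp))                  # (N, d) output features
```

The quadratic pairwise-distance matrix above is only workable for toy sizes; at the paper's scale of up to 300k points, neighbour search would need a spatial index such as a k-d tree.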