[2510.16071] MNO: Multiscale Neural Operator for 3D Computational Fluid Dynamics

arXiv - Machine Learning

Summary

The paper presents the Multiscale Neural Operator (MNO), a novel architecture designed for 3D computational fluid dynamics, enhancing accuracy and scalability in solving PDEs on irregular domains.

Why It Matters

This research addresses the limitations of existing neural operators in computational fluid dynamics, particularly in handling complex, multiscale fluid flows. By introducing MNO, the authors provide a framework that significantly improves prediction accuracy, which is crucial for applications in engineering and scientific simulations.

Key Takeaways

  • MNO improves accuracy in solving PDEs for fluid dynamics.
  • The architecture combines attention at three scales (global dimension-shrinkage, local graph, and micro point-wise) for better performance.
  • MNO reduces prediction errors by 5% to 50% compared to existing methods.
  • The framework is efficient for 3D unstructured point clouds.
  • Explicit multiscale design is essential for advancing neural operators.

Computer Science > Machine Learning
arXiv:2510.16071 (cs)
[Submitted on 17 Oct 2025 (v1), last revised 25 Feb 2026 (this version, v2)]

Title: MNO: Multiscale Neural Operator for 3D Computational Fluid Dynamics
Authors: Qinxuan Wang, Chuang Wang, Mingyu Zhang, Jingwei Sun, Peipei Yang, Shuo Tang, Shiming Xiang

Abstract: Neural operators have emerged as a powerful data-driven paradigm for solving partial differential equations (PDEs), while their accuracy and scalability are still limited, particularly on irregular domains where fluid flows exhibit rich multiscale structures. In this work, we introduce the Multiscale Neural Operator (MNO), a new architecture for computational fluid dynamics (CFD) on 3D unstructured point clouds. MNO explicitly decomposes information across three scales: a global dimension-shrinkage attention module for long-range dependencies, a local graph attention module for neighborhood-level interactions, and a micro point-wise attention module for fine-grained details. This design preserves multiscale inductive biases while remaining computationally efficient. We evaluate MNO on diverse benchmarks, covering steady-state and unsteady flow scenarios with up to 300k points. Across all tasks, MNO consistently outperforms state-of-the-art baselines, reducing prediction errors by 5% to 50%. The resu...
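To make the abstract's three-branch decomposition concrete, here is a toy NumPy sketch of what a multiscale attention block over a 3D point cloud could look like. None of this code comes from the paper: the anchor-sampling global branch, the k-nearest-neighbor local branch, and the gated point-wise branch (`global_shrink_attention`, `local_graph_attention`, `micro_pointwise`) are hypothetical stand-ins for MNO's actual learned modules, shown only to illustrate how the three scales can operate on the same features and be recombined.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_shrink_attention(x, r=4):
    # Global branch: shrink the point dimension to r "anchor" tokens and
    # attend from every point to the anchors (cost linear in N, not N^2).
    # Random anchor sampling here is a stand-in for a learned projection.
    n, d = x.shape
    anchors = x[rng.choice(n, size=r, replace=False)]   # (r, d)
    attn = softmax(x @ anchors.T / np.sqrt(d))          # (N, r)
    return attn @ anchors                               # (N, d)

def local_graph_attention(x, pos, k=8):
    # Local branch: attention restricted to each point's k nearest
    # neighbors in 3D space (a brute-force kNN graph for clarity).
    n, d = x.shape
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, :k]                # (N, k)
    out = np.empty_like(x)
    for i in range(n):
        xn = x[nbrs[i]]                                 # (k, d)
        w = softmax(xn @ x[i] / np.sqrt(d))             # (k,)
        out[i] = w @ xn
    return out

def micro_pointwise(x):
    # Micro branch: a purely per-point transform (gated channel mixing).
    w = rng.standard_normal((x.shape[1], x.shape[1])) / np.sqrt(x.shape[1])
    return np.tanh(x @ w) * x

def multiscale_block(x, pos):
    # Sum the three scales with a residual connection, mirroring the
    # explicit global / local / micro decomposition the abstract describes.
    return (x + global_shrink_attention(x)
              + local_graph_attention(x, pos)
              + micro_pointwise(x))

x = rng.standard_normal((64, 16))    # 64 points, 16 feature channels
pos = rng.standard_normal((64, 3))   # 3D coordinates of each point
y = multiscale_block(x, pos)
print(y.shape)  # (64, 16)
```

The design point the sketch tries to convey is that each branch sees the same features at a different receptive field: the shrunken global attention captures long-range structure cheaply, the kNN graph attention captures neighborhood interactions, and the point-wise branch refines fine-grained details, all summed into one update per block.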
