[2602.15472] Fluids You Can Trust: Property-Preserving Operator Learning for Incompressible Flows


arXiv - Machine Learning

Summary

This article introduces a kernel-based operator learning method for incompressible flows that serves as a fast surrogate for traditional numerical solvers while analytically preserving physical properties such as incompressibility, periodicity, and turbulence.

Why It Matters

The research addresses significant challenges in fluid dynamics by offering a more efficient alternative to traditional numerical solvers. This method not only reduces computational costs but also ensures that critical physical properties are maintained, which is crucial for accurate simulations in various engineering applications.

Key Takeaways

  • Presents a property-preserving kernel-based operator learning method for incompressible flows.
  • Achieves up to six orders of magnitude lower relative ℓ2 errors on generalization tasks than neural operator baselines.
  • Trains up to five orders of magnitude faster than neural operators while preserving physical properties exactly.

Physics > Fluid Dynamics
arXiv:2602.15472 (physics) [Submitted on 17 Feb 2026]

Title: Fluids You Can Trust: Property-Preserving Operator Learning for Incompressible Flows
Authors: Ramansh Sharma, Matthew Lowery, Houman Owhadi, Varun Shankar

Abstract: We present a novel property-preserving kernel-based operator learning method for incompressible flows governed by the incompressible Navier-Stokes equations. Traditional numerical solvers incur significant computational costs to respect incompressibility. Operator learning offers efficient surrogate models, but current neural operators fail to exactly enforce physical properties such as incompressibility, periodicity, and turbulence. Our method maps input functions to expansion coefficients of output functions in a property-preserving kernel basis, ensuring that predicted velocity fields analytically and simultaneously preserve the aforementioned physical properties. We evaluate the method on challenging 2D and 3D, laminar and turbulent, incompressible flow problems. Our method achieves up to six orders of magnitude lower relative $\ell_2$ errors upon generalization and trains up to five orders of magnitude faster compared to neural operators. Moreover, while our method enforces incompressibility analytically, neural operators exhibit very large deviations. Our results...
