[2602.17089] Synergizing Transport-Based Generative Models and Latent Geometry for Stochastic Closure Modeling
Summary
This article presents an approach to stochastic closure modeling that integrates transport-based generative models with latent geometry, substantially improving sampling speed while preserving physical fidelity.
Why It Matters
The research addresses a key limitation of diffusion models in generative AI: slow, iterative sampling. By moving to flow matching in a latent space, the approach enables single-step sampling up to two orders of magnitude faster than iterative diffusion-based methods, which could broaden the use of generative closure models in machine learning and dynamical systems.
Key Takeaways
- Transport-based generative models can achieve faster sampling for stochastic closure models.
- Flow matching in lower-dimensional latent spaces enhances efficiency, enabling single-step sampling.
- Implicit regularization (joint training) and explicit regularizers (metric-preserving and geometry-aware constraints) control latent-space distortion, preserving the physical fidelity of the sampled closure terms.
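To make the single-step sampling idea concrete, here is a minimal toy sketch, not the paper's method: in 1D, if source and target samples are paired by a deterministic monotone transport map (a rectified-flow-style straight-line coupling), the flow-matching velocity is constant along each path, so one Euler step of the learned ODE suffices. The target distribution, coupling, and linear velocity model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, s = 3.0, 0.5            # illustrative target N(m, s^2); source is N(0, 1)

# Straight-line (rectified-flow-style) coupling: pair each source sample
# with a target sample via the 1D monotone transport map T(x) = m + s*x.
x0 = rng.standard_normal(20_000)
x1 = m + s * x0
v_target = x1 - x0          # velocity is constant along each straight path

# Flow-matching regression of the velocity field at t = 0 (the only time a
# one-step Euler sampler ever evaluates): v(x, 0) ~= a*x + b, least squares.
A = np.stack([x0, np.ones_like(x0)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, v_target, rcond=None)

# Single-step sampling: one Euler step of the learned ODE from t=0 to t=1.
z = rng.standard_normal(20_000)
samples = z + (a * z + b)

print(round(samples.mean(), 2), round(samples.std(), 2))
```

Because the coupling makes every transport path a straight line, the single Euler step lands exactly on the target; with the independent couplings used in standard diffusion models, many iterative steps would be needed instead, which is the sampling-speed gap the paper targets.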
Computer Science > Machine Learning
arXiv:2602.17089 (cs)
[Submitted on 19 Feb 2026]
Title: Synergizing Transport-Based Generative Models and Latent Geometry for Stochastic Closure Modeling
Authors: Xinghao Dong, Huchen Yang, Jin-long Wu
Abstract: Diffusion models recently developed for generative AI tasks can produce high-quality samples while still maintaining diversity among samples to promote mode coverage, providing a promising path for learning stochastic closure models. Compared to other types of generative AI models, such as GANs and VAEs, the sampling speed is known as a key disadvantage of diffusion models. By systematically comparing transport-based generative models on a numerical example of 2D Kolmogorov flows, we show that flow matching in a lower-dimensional latent space is suited for fast sampling of stochastic closure models, enabling single-step sampling that is up to two orders of magnitude faster than iterative diffusion-based approaches. To control the latent space distortion and thus ensure the physical fidelity of the sampled closure term, we compare the implicit regularization offered by a joint training scheme against two explicit regularizers: metric-preserving (MP) and geometry-aware (GA) constraints. Besides offering a faster sampling speed, both explicitly...