[2602.07744] Riemannian MeanFlow
Summary
The paper introduces Riemannian MeanFlow (RMF), a novel framework for generative modeling on Riemannian manifolds, significantly reducing computational demands while maintaining high-quality outputs in applications like protein design.
Why It Matters
As generative modeling becomes increasingly important in fields like bioinformatics, RMF addresses a critical efficiency challenge by cutting the number of neural network evaluations required at inference, down to as few as one forward pass. This advancement can streamline workflows in scientific research, making complex modeling more accessible and less resource-intensive.
Key Takeaways
- RMF enables high-quality generative modeling with far fewer neural network evaluations (see the one-step sampling sketch after this list).
- The framework matches the sample quality of prior methods on promoter DNA design and protein backbone generation with up to 10× fewer function evaluations.
- Three equivalent characterizations of the manifold average velocity (Eulerian, Lagrangian, and semigroup identities) are derived, clarifying the flow dynamics.
- RMF supports efficient reward-guided design via reward look-ahead, predicting terminal states from intermediate ones at minimal additional cost.
- This approach can significantly reduce computational bottlenecks in large-scale scientific sampling.
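In the Euclidean MeanFlow setting, the average velocity over an interval [r, t] is the time-average of the instantaneous velocity, u(x_t, r, t) = (1/(t − r)) ∫_r^t v(x_s, s) ds, so a flow map can jump x_r = x_t − (t − r) u(x_t, r, t) in a single evaluation; on a manifold that jump has to be taken through the exponential map. Below is a minimal sketch of few-step sampling on the unit sphere under these assumptions. The function names (u_theta, sphere_exp, project_to_tangent), the time convention (noise at t = 1, data at t = 0), and the update rule are illustrative, not the paper's actual parameterization.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: move from point x along tangent vector v."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return x
    return np.cos(norm_v) * x + np.sin(norm_v) * (v / norm_v)

def project_to_tangent(x, v):
    """Project an ambient vector v onto the tangent space of the sphere at x."""
    return v - np.dot(x, v) * x

def few_step_sample(u_theta, x1, num_steps=1):
    """Integrate from noise (t=1) to data (t=0) with a learned average velocity.

    u_theta(x, r, t) is assumed to return an average velocity over [r, t] in the
    ambient space; each step jumps directly from time t to time r through the
    exponential map, so num_steps=1 is a single forward pass.
    """
    x = x1 / np.linalg.norm(x1)
    times = np.linspace(1.0, 0.0, num_steps + 1)
    for t, r in zip(times[:-1], times[1:]):
        u = project_to_tangent(x, u_theta(x, r, t))
        x = sphere_exp(x, (r - t) * u)  # one jump covers the whole interval [r, t]
    return x

# Toy usage with a placeholder average-velocity field standing in for a trained network.
u_theta = lambda x, r, t: x - np.array([0.0, 0.0, 1.0])
x1 = np.random.randn(3)
x0 = few_step_sample(u_theta, x1, num_steps=1)  # a single network evaluation
```

With num_steps=1 this is one forward pass; raising num_steps trades extra evaluations for fidelity, which is the knob a few-step flow map exposes.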
Computer Science > Machine Learning
arXiv:2602.07744 (cs)
[Submitted on 8 Feb 2026 (v1), last revised 13 Feb 2026 (this version, v2)]
Title: Riemannian MeanFlow
Authors: Dongyeop Woo, Marta Skreta, Seonghyun Park, Kirill Neklyudov, Sungsoo Ahn
Abstract: Diffusion and flow models have become the dominant paradigm for generative modeling on Riemannian manifolds, with successful applications in protein backbone generation and DNA sequence design. However, these methods require tens to hundreds of neural network evaluations at inference time, which can become a computational bottleneck in large-scale scientific sampling workflows. We introduce Riemannian MeanFlow (RMF), a framework for learning flow maps directly on manifolds, enabling high-quality generations with as few as one forward pass. We derive three equivalent characterizations of the manifold average velocity (Eulerian, Lagrangian, and semigroup identities), and analyze parameterizations and stabilization techniques to improve training on high-dimensional manifolds. In promoter DNA design and protein backbone generation settings, RMF achieves comparable sample quality to prior methods while requiring up to 10× fewer function evaluations. Finally, we show that few-step flow maps enable efficient reward-guided design through reward look-ahead, where terminal states can be predicted from intermediate ...
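The reward look-ahead mentioned in the abstract can be sketched in a Euclidean toy setting: because a flow map predicts a terminal state from any intermediate state in one evaluation, intermediate candidates can be scored by the reward of their predicted endpoints. The helper names (predict_terminal, reward_lookahead_select, u_theta, reward_fn) and the best-of-N selection rule below are illustrative assumptions, not the paper's actual guidance procedure.

```python
import numpy as np

def predict_terminal(u_theta, x_t, t):
    """Predict the terminal (t=0) state from x_t in a single flow-map jump (Euclidean case)."""
    return x_t - t * u_theta(x_t, 0.0, t)

def reward_lookahead_select(u_theta, reward_fn, candidates, t):
    """Score intermediate candidates by the reward of their predicted terminal states."""
    scores = [reward_fn(predict_terminal(u_theta, x, t)) for x in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

# Toy usage: a placeholder velocity field and a reward that favors states near the origin.
u_theta = lambda x, r, t: x                      # stand-in for a trained model
reward_fn = lambda x: -float(np.linalg.norm(x))
candidates = [np.random.randn(4) for _ in range(8)]
x_best, score = reward_lookahead_select(u_theta, reward_fn, candidates, t=0.5)
```

The same pattern carries over to a manifold by replacing the straight-line jump with an exponential-map step, as in the sampling sketch above.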