[2510.14974] pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation

arXiv - AI · 4 min read

Summary

The paper presents pi-Flow, a policy-based approach to few-step generation with diffusion and flow models that uses imitation distillation to improve training stability and output quality.

Why It Matters

This research addresses the quality-diversity trade-off in generative models, offering a more efficient training method that can improve performance in tasks like image generation. By introducing a policy-based flow model, it contributes to advancements in machine learning, particularly in generative AI applications.

Key Takeaways

  • pi-Flow modifies the output layer of a student flow model to predict a network-free policy that produces dynamic flow velocities at future substeps.
  • The novel imitation distillation approach enhances training stability and scalability.
  • Outperforms previous models in both quality and diversity on benchmark datasets.
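To make the sampling idea concrete, here is a minimal numpy sketch of policy-based few-step generation. The network is called once per step; the policy it emits then supplies velocities for several cheap ODE substeps. The endpoint-style policy parameterization and the toy `network_student` are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def network_student(x_t, t):
    """Placeholder for the (expensive) student network: one call per
    sampling step. Here it emits a toy 'predicted endpoint'; the paper
    parameterizes a richer network-free policy."""
    return np.tanh(x_t)

def policy_velocity(x, t, x1_hat, eps=1e-6):
    """Network-free policy: velocity pointing from the current state
    toward the predicted endpoint (assumed linear-path form)."""
    return (x1_hat - x) / max(1.0 - t, eps)

def sample(x0, n_steps=4, n_substeps=8):
    """Few-step sampling: one network evaluation per step, then cheap
    policy evaluations on the substeps (explicit Euler integration)."""
    x, t = x0.copy(), 0.0
    dt = 1.0 / (n_steps * n_substeps)
    for _ in range(n_steps):
        x1_hat = network_student(x, t)      # single network call
        for _ in range(n_substeps):         # network-free substeps
            x = x + policy_velocity(x, t, x1_hat) * dt
            t += dt
    return x
```

With 4 steps and 8 substeps this takes 32 Euler substeps but only 4 network evaluations, which is the point of pushing the per-substep work into a network-free policy.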

Computer Science > Machine Learning
arXiv:2510.14974 (cs)
[Submitted on 16 Oct 2025 (v1), last revised 19 Feb 2026 (this version, v3)]

Title: pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation
Authors: Hansheng Chen, Kai Zhang, Hao Tan, Leonidas Guibas, Gordon Wetzstein, Sai Bi

Abstract: Few-step diffusion or flow-based generative models typically distill a velocity-predicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality-diversity trade-off. To address this, we propose policy-based flow models ($\pi$-Flow). $\pi$-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and accurate ODE integration on these substeps without extra network evaluations. To match the policy's ODE trajectory to the teacher's, we introduce a novel imitation distillation approach, which matches the policy's velocity to the teacher's along the policy's trajectory using a standard $\ell_2$ flow matching loss. By simply mimicking the teacher's behavior, $\pi$-Flow enables stable and scalable training and avoids the quality-diversity trade-off. On I...
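The distillation objective described in the abstract can be sketched as follows: roll out the policy's own ODE trajectory, and at each substep penalize the squared gap between the policy's velocity and the teacher's velocity at that state. The toy `teacher_velocity` field and the endpoint policy parameterization are assumptions for illustration; only the structure of the loss follows the abstract.

```python
import numpy as np

def teacher_velocity(x, t):
    """Stand-in for the pretrained teacher's velocity field
    (any callable v_teacher(x, t); here a toy contracting field)."""
    return -x

def policy_velocity(x, t, theta, eps=1e-6):
    """Network-free policy from one student forward pass.
    Toy parameterization: velocity toward a predicted endpoint theta."""
    return (theta - x) / max(1.0 - t, eps)

def imitation_loss(theta, x_init, n_substeps=8):
    """Integrate the *policy's* trajectory (not the teacher's) and
    accumulate the l2 flow-matching loss against the teacher along it."""
    x, t, dt, loss = x_init.copy(), 0.0, 1.0 / n_substeps, 0.0
    for _ in range(n_substeps):
        v_pi = policy_velocity(x, t, theta)
        v_te = teacher_velocity(x, t)
        loss += np.mean((v_pi - v_te) ** 2) * dt
        x = x + v_pi * dt   # step along the policy's own ODE
        t += dt
    return loss
```

A policy whose endpoint agrees with the teacher's flow incurs a much smaller loss than one pointing elsewhere, which is the signal that drives the student toward mimicking the teacher's behavior.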


