[2510.14974] pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation
Summary
The paper presents pi-Flow ($\pi$-Flow), a policy-based approach to few-step generation for diffusion and flow models: the student predicts a network-free policy instead of a shortcut to denoised data, and an imitation distillation objective matches the policy's trajectory to the teacher's, improving training stability and output quality.
Why It Matters
This research addresses the quality-diversity trade-off that plagues existing few-step distillation methods, offering a simpler and more stable training recipe that can improve performance in tasks like image generation. By introducing a policy-based flow model, it contributes to advancements in fast sampling for generative AI applications.
Key Takeaways
- pi-Flow modifies the output layer of a student flow model to predict a network-free policy that produces dynamic flow velocities at future substeps.
- A novel imitation distillation approach matches the policy's velocity to the teacher's along the policy's own trajectory, yielding stable and scalable training.
- pi-Flow outperforms previous few-step distillation methods in both quality and diversity on benchmark datasets.
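The core sampling idea can be illustrated with a toy sketch. Here the student network is called once per outer step and outputs a "policy", represented (purely as an illustrative assumption, not the paper's parameterization) as a set of anchor velocities that are linearly interpolated in time; Euler substeps then integrate the ODE using only this cheap interpolation, with no extra network evaluations. All function names (`fake_student_network`, `policy_velocity`, `sample`) are hypothetical.

```python
import numpy as np

def fake_student_network(x, t, n_anchors=4):
    # Stand-in for the student network: returns per-anchor velocities.
    # A real model would be a neural net; this toy version pushes x toward 0.
    return np.stack([-x * (1.0 - t)] * n_anchors)  # shape (n_anchors, dim)

def policy_velocity(anchors, t0, t1, t):
    # Network-free policy: linearly interpolate anchor velocities over [t0, t1].
    s = (t - t0) / (t1 - t0)                  # normalized substep time in [0, 1]
    grid = np.linspace(0.0, 1.0, len(anchors))
    return np.array([np.interp(s, grid, anchors[:, d])
                     for d in range(anchors.shape[1])])

def sample(x0, n_steps=4, n_substeps=8):
    x, ts = x0, np.linspace(0.0, 1.0, n_steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        anchors = fake_student_network(x, t0)  # one network call per outer step
        dt = (t1 - t0) / n_substeps
        for k in range(n_substeps):            # cheap Euler substeps
            t = t0 + k * dt
            x = x + dt * policy_velocity(anchors, t0, t1, t)
    return x

x = sample(np.array([1.0, -2.0]))
```

The point of the structure: the expensive network is evaluated `n_steps` times, while the ODE is integrated on `n_steps * n_substeps` points, so substep accuracy costs almost nothing.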
arXiv:2510.14974 (cs)
Submitted on 16 Oct 2025 (v1); last revised 19 Feb 2026 (this version, v3)
Authors: Hansheng Chen, Kai Zhang, Hao Tan, Leonidas Guibas, Gordon Wetzstein, Sai Bi
Abstract
Few-step diffusion or flow-based generative models typically distill a velocity-predicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality-diversity trade-off. To address this, we propose policy-based flow models ($\pi$-Flow). $\pi$-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and accurate ODE integration on these substeps without extra network evaluations. To match the policy's ODE trajectory to the teacher's, we introduce a novel imitation distillation approach, which matches the policy's velocity to the teacher's along the policy's trajectory using a standard $\ell_2$ flow matching loss. By simply mimicking the teacher's behavior, $\pi$-Flow enables stable and scalable training and avoids the quality-diversity trade-off. On I...
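The imitation distillation objective described in the abstract can be sketched in one dimension: roll out the policy's ODE trajectory, and penalize the squared difference between the policy's velocity and a frozen teacher's velocity at points along that trajectory. The affine policy, the analytic `teacher_velocity`, and the finite-difference training loop are all illustrative assumptions, not the paper's actual models or optimizer.

```python
import numpy as np

def teacher_velocity(x, t):
    # Frozen teacher: a simple known flow field standing in for a large model.
    return -2.0 * x + t

def make_policy(theta):
    # Network-free policy: affine in (x, t) with learnable parameters theta.
    a, b, c = theta
    return lambda x, t: a * x + b * t + c

def imitation_loss(theta, x0=1.0, n_substeps=16):
    policy = make_policy(theta)
    x, dt, loss = x0, 1.0 / n_substeps, 0.0
    for k in range(n_substeps):
        t = k * dt
        v = policy(x, t)
        loss += (v - teacher_velocity(x, t)) ** 2  # l2 flow-matching term,
        x = x + dt * v                             # evaluated on the policy's
    return loss / n_substeps                       # own trajectory

# Crude finite-difference gradient descent on theta (illustration only).
theta = np.zeros(3)
for _ in range(200):
    grad = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = 1e-4
        grad[i] = (imitation_loss(theta + e) - imitation_loss(theta - e)) / 2e-4
    theta -= 0.05 * grad
```

Note the key property the abstract emphasizes: the loss is an ordinary $\ell_2$ velocity-matching term, but it is evaluated along the trajectory the policy itself generates, so the student learns to imitate the teacher's behavior where its own ODE actually goes.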