[2506.21427] Flow-Based Single-Step Completion for Efficient and Expressive Policy Learning
Summary
The paper presents the Single-Step Completion Policy (SSCP), a reinforcement learning approach that combines efficiency with expressiveness by enabling one-shot action generation through an augmented flow-matching objective.
Why It Matters
This research addresses the high inference costs and training instability of generative models in offline reinforcement learning. By introducing SSCP, it offers a more efficient and adaptable framework that improves performance across offline, offline-to-online, and online RL settings, making it relevant for researchers and practitioners in machine learning and robotics.
Key Takeaways
- SSCP predicts completion vectors directly from intermediate flow samples, making action generation more efficient.
- The method combines the expressiveness of generative models with the efficiency of unimodal policies.
- SSCP scales effectively across offline, offline-to-online, and online RL settings.
- It extends to goal-conditioned RL, allowing for better exploitation of subgoal structures.
- Strong performance across standard benchmarks positions SSCP as a versatile tool for deep RL.
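To make the "completion vector" takeaway concrete, here is a minimal NumPy sketch of the underlying idea. The function names are illustrative, not the paper's implementation; it assumes a linear (rectified-flow-style) interpolation path, under which the velocity target is constant, so an intermediate sample can be completed to the endpoint in a single jump.

```python
import numpy as np

# Sketch of single-step completion under a linear flow path
#   x_t = (1 - t) * x0 + t * x1,
# whose velocity target is v = x1 - x0. A policy predicting v from
# (x_t, t) can "complete" the flow in one step instead of integrating.

def interpolate(x0, x1, t):
    """Point on the linear flow path at time t in [0, 1]."""
    return (1.0 - t) * x0 + t * x1

def single_step_completion(x_t, t, v_pred):
    """Jump from an intermediate sample straight to the endpoint."""
    return x_t + (1.0 - t) * v_pred

# Toy check: with the exact velocity target, one step recovers x1
# from any intermediate time t, with no iterative sampling.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)   # noise sample
x1 = rng.standard_normal(4)   # target "action" sample
for t in (0.0, 0.3, 0.9):
    x_t = interpolate(x0, x1, t)
    assert np.allclose(single_step_completion(x_t, t, x1 - x0), x1)
```

In a trained model, `v_pred` would come from a network conditioned on state, `x_t`, and `t`; the sketch only illustrates why a correct completion-vector prediction makes one-shot generation possible.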
Computer Science > Machine Learning
arXiv:2506.21427 (cs) [Submitted on 26 Jun 2025 (v1), last revised 24 Feb 2026 (this version, v3)]
Title: Flow-Based Single-Step Completion for Efficient and Expressive Policy Learning
Authors: Prajwal Koirala, Cody Fleming
Abstract: Generative models such as diffusion and flow-matching offer expressive policies for offline reinforcement learning (RL) by capturing rich, multimodal action distributions, but their iterative sampling introduces high inference costs and training instability due to gradient propagation across sampling steps. We propose the Single-Step Completion Policy (SSCP), a generative policy trained with an augmented flow-matching objective to predict direct completion vectors from intermediate flow samples, enabling accurate, one-shot action generation. In an off-policy actor-critic framework, SSCP combines the expressiveness of generative models with the training and inference efficiency of unimodal policies, without requiring long backpropagation chains. Our method scales effectively to offline, offline-to-online, and online RL settings, offering substantial gains in speed and adaptability over diffusion-based baselines. We further extend SSCP to goal-conditioned RL, enabling flat policies to exploit subgoal structures without explicit hierarchi...
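The abstract's speed claim rests on replacing iterative sampling with a single evaluation. The sketch below contrasts the two regimes under an assumed straight-line velocity field (again illustrative, not the paper's code): an Euler sampler needs N evaluations to traverse the flow, while a completion-style policy reaches the same endpoint in one.

```python
import numpy as np

# Illustrative cost comparison: with a straight-line velocity field
# v(x_t, t) = x1 - x0, Euler integration takes many small steps,
# while single-step completion jumps directly to the endpoint.
rng = np.random.default_rng(2)
x0 = rng.standard_normal(3)
x1 = rng.standard_normal(3)
v = x1 - x0  # constant velocity of the linear flow path

# Iterative sampling: N Euler steps of size 1/N. In a real model each
# step would be one network evaluation.
N = 50
x = x0.copy()
for _ in range(N):
    x = x + v / N
euler_out = x

# Single-step completion: one evaluation.
one_step_out = x0 + v

# Both reach the endpoint; the one-step route is N times cheaper.
assert np.allclose(euler_out, x1)
assert np.allclose(one_step_out, x1)
```

The equivalence is exact only because the toy velocity field is constant; with a learned, state-dependent field, the paper's augmented objective is what trains the network to make the one-step jump accurate.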