[2508.13904] One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Learning
Summary
The paper introduces One-Step Flow Q-Learning (OFQL), a novel framework that improves offline reinforcement learning by generating actions in a single step, improving both efficiency and performance over existing diffusion-policy methods.
Why It Matters
This research addresses significant limitations in offline reinforcement learning, particularly the inefficiencies of multi-step denoising in existing diffusion policies. By proposing OFQL, the authors provide a more efficient alternative that could advance the field, making it relevant for researchers and practitioners looking to optimize reinforcement learning algorithms.
Key Takeaways
- OFQL enables effective one-step action generation without auxiliary modules.
- The framework significantly reduces computation time during training and inference.
- OFQL outperforms traditional multi-step DQL methods, achieving state-of-the-art results.
- The approach reformulates diffusion policies within the Flow Matching paradigm.
- Extensive experiments validate OFQL's robustness and efficiency on the D4RL benchmark.
arXiv:2508.13904 (cs) [Submitted on 19 Aug 2025 (v1), last revised 24 Feb 2026 (this version, v3)]
Title: One-Step Flow Q-Learning: Addressing the Diffusion Policy Bottleneck in Offline Reinforcement Learning
Authors: Thanh Nguyen, Chang D. Yoo
Abstract: Diffusion Q-Learning (DQL) has established diffusion policies as a high-performing paradigm for offline reinforcement learning, but its reliance on multi-step denoising for action generation renders both training and inference slow and fragile. Existing efforts to accelerate DQL toward one-step denoising typically rely on auxiliary modules or policy distillation, sacrificing either simplicity or performance. It remains unclear whether a one-step policy can be trained directly without such trade-offs. To this end, we introduce One-Step Flow Q-Learning (OFQL), a novel framework that enables effective one-step action generation during both training and inference, without auxiliary modules or distillation. OFQL reformulates the DQL policy within the Flow Matching (FM) paradigm but departs from conventional FM by learning an average velocity field that directly supports accurate one-step action generation. This design removes the need for multi-step denoising and backpropagation-through-time updates...
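To make the average-velocity idea concrete, here is a minimal toy sketch (not the paper's implementation). It uses the linear interpolation path common in Flow Matching, where the average velocity over an interval is available in closed form; in OFQL this quantity would instead be approximated by a learned network. All variable names and the specific toy values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: noise sample x0 and a target action a. The linear path
# x_t = (1 - t) * x0 + t * a is the conditional flow used in Flow Matching.
x0 = rng.normal(size=4)                  # "noise" starting point
a = np.array([0.5, -1.0, 0.25, 2.0])     # target action (hypothetical)

def inst_velocity(x_t, t):
    # Instantaneous velocity of the linear path: dx_t/dt = a - x0.
    return a - x0

def avg_velocity(x_t, t, r):
    # Average velocity over [t, r]: (x_r - x_t) / (r - t).
    # OFQL learns a network approximating this; here we use the
    # analytic value available for the toy linear path.
    x_r = (1 - r) * x0 + r * a
    return (x_r - x_t) / (r - t)

# Multi-step Euler integration of the instantaneous velocity: what a
# standard diffusion/flow policy does at inference time.
steps = 10
x = x0.copy()
for k in range(steps):
    t = k / steps
    x = x + inst_velocity(x, t) / steps
multi_step_action = x

# One-step generation with the average velocity over [0, 1]: the
# shortcut that an average-velocity field enables.
one_step_action = x0 + avg_velocity(x0, 0.0, 1.0)

print(np.allclose(multi_step_action, a))  # True: both recover the target
print(np.allclose(one_step_action, a))    # True
```

On this toy linear path both procedures recover the target exactly; the point of learning the average velocity, per the abstract, is that the one-step shortcut remains accurate even when the learned flow is not this simple, removing multi-step denoising at both training and inference time.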