[2602.18015] Flow Actor-Critic for Offline Reinforcement Learning
Summary
The paper introduces Flow Actor-Critic, a new method for offline reinforcement learning that uses flow policies to capture complex, multi-modal dataset distributions, achieving state-of-the-art performance on the D4RL and OGBench benchmarks.
Why It Matters
As offline reinforcement learning becomes increasingly important in AI applications, the ability to effectively model complex data distributions is crucial. This research presents a significant advancement in the field, potentially improving the performance of RL systems in real-world scenarios where data is limited or difficult to obtain.
Key Takeaways
- Flow Actor-Critic addresses the limitations of traditional Gaussian policies in offline RL.
- The method uses the flow model for both the actor and a conservative critic regularizer, improving performance.
- Achieves new state-of-the-art results on D4RL and OGBench benchmarks.
- Introduces a novel critic regularizer to prevent Q-value explosion in out-of-data regions.
- Highlights the importance of expressive policies in handling complex data distributions.
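As a rough intuition for what a flow policy does, transporting noise samples to actions by integrating a velocity field, here is a toy 1-D sketch; the velocity field, mode location, and step count below are made up for illustration (in the paper's method the velocity field would be a learned, state-conditioned network), so this is a minimal caricature, not the actual algorithm:

```python
def velocity(x, t):
    # Toy time-dependent velocity field pointing at a data mode at 0.5;
    # a real flow policy would use a learned network v_theta(x, t, state).
    return (0.5 - x) / max(1.0 - t, 1e-6)

def flow_sample(noise, steps=10):
    # Euler integration of dx/dt = velocity(x, t) from t = 0 to t = 1,
    # mapping a noise sample to an action.
    x, dt = noise, 1.0 / steps
    for k in range(steps):
        x += velocity(x, k * dt) * dt
    return x

print(flow_sample(noise=-1.0))  # integrates toward the mode at 0.5
```

Because each integration step can bend the trajectory differently, such a sampler can place probability mass on several separated modes, which is what lets flow policies fit multi-modal datasets that a single Gaussian cannot.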
Computer Science > Machine Learning
arXiv:2602.18015 (cs) [Submitted on 20 Feb 2026]
Title: Flow Actor-Critic for Offline Reinforcement Learning
Authors: Jongseong Chae, Jongeui Park, Yongjae Shin, Gyeongmin Kim, Seungyul Han, Youngchul Sung
Abstract: Datasets in offline reinforcement learning (RL) often exhibit complex, multi-modal distributions, necessitating policies expressive enough to capture them beyond the widely used Gaussian policies. To handle such complex, multi-modal datasets, this paper proposes Flow Actor-Critic, a new actor-critic method for offline RL built on recent flow policies. The proposed method not only uses the flow model for the actor, as in previous flow policies, but also exploits the expressive flow model to obtain a conservative critic that prevents Q-value explosion in out-of-data regions. To this end, the authors propose a new form of critic regularizer based on a flow behavior proxy model obtained as a byproduct of the flow-based actor design. Leveraging the flow model in this joint way, the method achieves new state-of-the-art performance on offline RL test suites including the D4RL and recent OGBench benchmarks.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
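The conservative-critic idea in the abstract, penalizing Q-values on out-of-data actions relative to actions from the behavior proxy model, can be caricatured as follows. The linear critic, the `beta` weight, and all function names here are illustrative assumptions, not the paper's actual formulation:

```python
def q_value(weights, action):
    # Toy linear critic: Q(a) = w0 + w1 * a. A real critic would be a
    # neural network over (state, action).
    return weights[0] + weights[1] * action

def conservative_penalty(weights, proxy_action, ood_action, beta=1.0):
    # Push Q down on out-of-data (OOD) actions and up on actions the
    # behavior proxy would take, discouraging Q-value explosion
    # outside the dataset's support.
    return beta * (q_value(weights, ood_action) - q_value(weights, proxy_action))

print(conservative_penalty((0.0, 2.0), proxy_action=0.5, ood_action=1.5))  # 2.0
```

Minimizing such a penalty alongside the usual Bellman loss biases the critic toward lower values off-dataset; the novelty claimed in the paper is that the proxy actions come from a flow behavior model obtained for free from the actor's own flow training.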