[2506.10167] Wasserstein Barycenter Soft Actor-Critic
Summary
The Wasserstein Barycenter Soft Actor-Critic (WBSAC) algorithm enhances sample efficiency in reinforcement learning by combining pessimistic and optimistic exploration strategies.
Why It Matters
This research addresses the critical challenge of sample inefficiency in reinforcement learning, particularly in environments with sparse rewards. By blending a pessimistic and an optimistic policy into a single directed exploration strategy, WBSAC can reduce the number of environment interactions needed to learn continuous control tasks, making it relevant for researchers and practitioners in machine learning.
Key Takeaways
- WBSAC improves sample efficiency in reinforcement learning tasks.
- The algorithm utilizes a dual actor approach for exploration.
- It is particularly effective in environments with sparse rewards.
- WBSAC outperforms existing state-of-the-art off-policy algorithms.
- The research contributes to advancing techniques in continuous control domains.
Computer Science > Machine Learning
arXiv:2506.10167 (cs) [Submitted on 11 Jun 2025 (v1), last revised 24 Feb 2026 (this version, v4)]
Title: Wasserstein Barycenter Soft Actor-Critic
Authors: Zahra Shahrooei, Ali Baheri
Abstract: Deep off-policy actor-critic algorithms have emerged as the leading framework for reinforcement learning in continuous control domains. However, most of these algorithms suffer from poor sample efficiency, especially in environments with sparse rewards. In this paper, we take a step towards addressing this issue by providing a principled directed exploration strategy. We propose the Wasserstein Barycenter Soft Actor-Critic (WBSAC) algorithm, which benefits from a pessimistic actor for temporal-difference learning and an optimistic actor that promotes exploration. This is achieved by using the Wasserstein barycenter of the pessimistic and optimistic policies as the exploration policy and adjusting the degree of exploration throughout the learning process. We compare WBSAC with state-of-the-art off-policy actor-critic algorithms and show that WBSAC is more sample-efficient on MuJoCo continuous control tasks.
Subjects: Machine Learning (cs.LG); Systems and Control (eess.SY)
Cite as: arXiv:2506.10167 [cs.LG] (arXiv:2506.10167v4 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2506.10167
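The abstract describes the exploration policy as the Wasserstein barycenter of the pessimistic and optimistic actors. For Gaussian policies (the usual parameterization in SAC-style methods), the Wasserstein-2 barycenter of two one-dimensional Gaussians has a closed form: both the mean and the standard deviation are interpolated linearly. The sketch below is illustrative only, not the paper's implementation; the function name, the 1-D restriction, and the interpolation weight `lam` (standing in for the paper's adjustable degree of exploration) are assumptions.

```python
import random

def gaussian_w2_barycenter(mu_p, sigma_p, mu_o, sigma_o, lam):
    """Wasserstein-2 barycenter of two 1-D Gaussian policies.

    For Gaussians, the W2 barycenter with weights (1 - lam, lam)
    is itself Gaussian, with mean and standard deviation
    interpolated linearly. Here lam is a hypothetical knob for
    the degree of exploration: 0 recovers the pessimistic
    policy, 1 the optimistic one.
    """
    mu = (1.0 - lam) * mu_p + lam * mu_o
    sigma = (1.0 - lam) * sigma_p + lam * sigma_o
    return mu, sigma

# Sample an exploration action from the barycenter policy,
# mixing a pessimistic N(0, 1) and an optimistic N(2, 3) actor.
mu, sigma = gaussian_w2_barycenter(0.0, 1.0, 2.0, 3.0, lam=0.5)
action = random.gauss(mu, sigma)
```

Annealing `lam` toward zero over training would mirror the paper's idea of adjusting the degree of exploration as learning progresses, though the actual schedule used by WBSAC is specified in the paper, not here.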