[2602.12375] Value Bonuses using Ensemble Errors for Exploration in Reinforcement Learning
Summary
This paper introduces Value Bonuses with Ensemble Errors (VBE), an algorithm that improves exploration in reinforcement learning by turning the prediction errors of an ensemble of random action-value functions into value bonuses, yielding first-visit optimism and deep exploration.
Why It Matters
Exploration is a critical challenge in reinforcement learning, and many existing methods only reward exploration retroactively, after a novel state has already been reached. The VBE algorithm addresses this gap by providing bonuses that are large before a state-action pair is ever visited, which can improve performance in complex environments and makes the method relevant for researchers and practitioners in AI and machine learning.
Key Takeaways
- VBE enhances exploration by using ensemble errors to create value bonuses.
- The algorithm promotes first-visit optimism, encouraging agents to explore new states.
- VBE outperforms existing methods like Bootstrap DQN and reward bonus approaches in classic environments.
- The approach is scalable to complex environments, such as Atari games.
- Understanding VBE can lead to advancements in reinforcement learning strategies.
Computer Science > Machine Learning
arXiv:2602.12375 (cs)
[Submitted on 12 Feb 2026]
Title: Value Bonuses using Ensemble Errors for Exploration in Reinforcement Learning
Authors: Abdul Wahab, Raksha Kumaraswamy, Martha White
Abstract: Optimistic value estimates provide one mechanism for directed exploration in reinforcement learning (RL). The agent acts greedily with respect to an estimate of the value plus what can be seen as a value bonus. The value bonus can be learned by estimating a value function on reward bonuses, propagating local uncertainties around rewards. However, this approach only increases the value bonus for an action retroactively, after seeing a higher reward bonus from that state and action. Such an approach does not encourage the agent to visit a state and action for the first time. In this work, we introduce an algorithm for exploration called Value Bonuses with Ensemble errors (VBE) that maintains an ensemble of random action-value functions (RQFs). VBE uses the errors in the estimation of these RQFs to design value bonuses that provide first-visit optimism and deep exploration. The key idea is to design the rewards for these RQFs in such a way that the value bonus can decrease to zero. We show that VBE outperforms Bootstrap DQN and two reward bonus approaches (RND and ACB) on seve...
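The mechanism the abstract describes — ensemble prediction errors against fixed random functions that are large for unvisited state-action pairs and shrink toward zero with training — can be illustrated with a minimal linear sketch. Everything below (the tabular features, the normalized regression step, the class and parameter names) is an illustrative assumption for intuition, not the paper's actual RQF construction:

```python
import numpy as np

rng = np.random.default_rng(0)

class EnsembleBonus:
    """Toy value-bonus sketch: each ensemble member tries to predict a
    fixed random linear function of (state, action) features. The mean
    prediction error serves as the exploration bonus; it is large for
    never-visited pairs and decays to zero as a pair is trained on."""

    def __init__(self, n_states, n_actions, n_ensemble=8, dim=6):
        # Fixed random features per (state, action) pair (assumption).
        self.phi = rng.normal(size=(n_states, n_actions, dim))
        # Fixed random target weights, one vector per ensemble member.
        self.targets = rng.normal(size=(n_ensemble, dim))
        # Learnable predictors, updated only on visited pairs.
        self.preds = np.zeros((n_ensemble, dim))

    def bonus(self, s, a):
        """Mean absolute ensemble error at (s, a)."""
        x = self.phi[s, a]
        return np.mean(np.abs((self.preds - self.targets) @ x))

    def update(self, s, a, lr=0.5):
        """One normalized regression step toward the random targets."""
        x = self.phi[s, a]
        err = (self.preds - self.targets) @ x          # per-member error
        self.preds -= lr * np.outer(err, x) / (x @ x)  # gradient step

vb = EnsembleBonus(n_states=10, n_actions=4)
before = vb.bonus(0, 0)
for _ in range(50):
    vb.update(0, 0)          # repeatedly "visit" the pair (0, 0)
after = vb.bonus(0, 0)
print(f"visited pair bonus:   {before:.3f} -> {after:.2e}")
print(f"unvisited pair bonus: {vb.bonus(1, 1):.3f}")
```

Acting greedily with respect to Q(s, a) plus a scaled bonus(s, a) would then give the first-visit optimism the paper emphasizes: a pair that has never been updated carries a large bonus immediately, rather than only after a reward-bonus signal propagates back to it.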