[2602.16363] Improved Bounds for Reward-Agnostic and Reward-Free Exploration
Summary
This paper presents improved algorithms for reward-free and reward-agnostic exploration in Markov decision processes, enabling agents to recover near-optimal ($\epsilon$-optimal) policies without observing rewards during exploration.
Why It Matters
The findings are significant for advancing exploration strategies in reinforcement learning, particularly in scenarios where rewards are unknown or not provided. This research could lead to more efficient learning algorithms applicable in various AI domains, including robotics and autonomous systems.
Key Takeaways
- Introduces a new algorithm that significantly relaxes the requirement on the accuracy parameter $\epsilon$ for reward-agnostic exploration.
- Establishes a tight lower bound for reward-free exploration, bridging the gap with known upper bounds.
- Enhances the understanding of exploration strategies in episodic finite-horizon MDPs.
Computer Science > Machine Learning
arXiv:2602.16363 (cs) [Submitted on 18 Feb 2026]
Title: Improved Bounds for Reward-Agnostic and Reward-Free Exploration
Authors: Oran Ridel, Alon Cohen
Abstract: We study reward-free and reward-agnostic exploration in episodic finite-horizon Markov decision processes (MDPs), where an agent explores an unknown environment without observing external rewards. Reward-free exploration aims to enable $\epsilon$-optimal policies for any reward revealed after exploration, while reward-agnostic exploration targets $\epsilon$-optimality for rewards drawn from a small finite class. In the reward-agnostic setting, Li, Yan, Chen, and Fan achieve minimax sample complexity, but only for restrictively small accuracy parameter $\epsilon$. We propose a new algorithm that significantly relaxes the requirement on $\epsilon$. Our approach is novel and of technical interest by itself. Our algorithm employs an online learning procedure with carefully designed rewards to construct an exploration policy, which is used to gather data sufficient for accurate dynamics estimation and subsequent computation of an $\epsilon$-optimal policy once the reward is revealed.
Finally, we establish a tight lower bound for reward-free exploration, closing the gap between known upper and lower bounds.
Subjects: Machine Learning (c...
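The explore-then-plan pipeline described in the abstract can be illustrated with a minimal sketch in a tabular toy MDP. This is not the paper's algorithm: the paper constructs its exploration policy via an online learning procedure with carefully designed rewards, whereas the sketch below substitutes a uniform-random exploration policy for simplicity. All problem sizes (`S`, `A`, `H`) and the generated MDP are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP: S states, A actions, horizon H.
S, A, H = 3, 2, 4
true_P = rng.dirichlet(np.ones(S), size=(S, A))  # true dynamics, unknown to the agent

# --- Phase 1: reward-free exploration. ---
# Simplification: uniform-random actions stand in for the paper's
# online-learning-based exploration policy.
counts = np.zeros((S, A, S))
for _ in range(5000):  # exploration episodes
    s = 0
    for _ in range(H):
        a = rng.integers(A)
        s_next = rng.choice(S, p=true_P[s, a])
        counts[s, a, s_next] += 1
        s = s_next

# Estimate the dynamics from visitation counts (tiny smoothing for unvisited pairs).
P_hat = (counts + 1e-6) / (counts + 1e-6).sum(axis=2, keepdims=True)

# --- Phase 2: the reward is revealed; plan against the estimated model. ---
def plan(P, r, H):
    """Finite-horizon value iteration; returns per-step greedy policy and initial values."""
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = r + P @ V            # Q[s, a] = r[s, a] + sum_s' P[s, a, s'] * V[s']
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi, V

reward = rng.random((S, A))      # revealed only after exploration ends
pi_hat, V_hat = plan(P_hat, reward, H)     # policy from the estimated model
pi_star, V_star = plan(true_P, reward, H)  # oracle policy from the true model
print(float(np.max(np.abs(V_hat - V_star))))  # model-estimation error in value space
```

Because the dynamics estimate is built before any reward is seen, the same `P_hat` can be reused to plan for any reward revealed afterward, which is exactly the point of the reward-free setting.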