[2602.15076] Near-Optimal Sample Complexity for Online Constrained MDPs
Summary
This paper presents a model-based primal-dual algorithm for online constrained Markov Decision Processes (CMDPs), achieving near-optimal sample complexity while ensuring safety in reinforcement learning applications.
Why It Matters
Safety is crucial in reinforcement learning, especially for real-world applications such as autonomous driving and healthcare. This work tackles the problem of enforcing safety constraints while optimizing performance, and shows that doing so need not cost substantially more samples than unconstrained learning.
Key Takeaways
- Proposes a model-based primal-dual algorithm for CMDPs.
- Achieves $\tilde{O}\left(\frac{SAH^3}{\varepsilon^2}\right)$ episodes for relaxed feasibility, where $\varepsilon$-bounded violations are allowed.
- Achieves $\tilde{O}\left(\frac{SAH^5}{\varepsilon^2 \zeta^2}\right)$ episodes for strict feasibility, where zero violation is required.
- Demonstrates that, under relaxed feasibility, learning CMDPs is statistically no harder than learning unconstrained MDPs: the bound matches the unconstrained lower bound.
- Addresses significant safety concerns in real-world reinforcement learning applications.
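To make the primal-dual idea concrete, here is a minimal sketch on a toy tabular CMDP. This is an illustration only, with hypothetical problem sizes and a known model solved by finite-horizon value iteration; the paper's algorithm instead builds optimistic empirical models from online episodes. The primal step plans against the Lagrangian reward $r - \lambda c$, and the dual step runs projected gradient ascent on the constraint violation.

```python
import numpy as np

# Hypothetical toy CMDP (sizes chosen arbitrarily for illustration).
S, A, H = 4, 2, 5
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = distribution over next states
r = rng.uniform(size=(S, A))                 # reward in [0, 1]
c = rng.uniform(size=(S, A))                 # constraint cost in [0, 1]
tau = 0.5 * H                                # budget: require E[sum of costs] <= tau

def plan(weighted):
    """Finite-horizon value iteration for a (possibly Lagrangian) reward."""
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = weighted + P @ V                 # Q[s, a] = w(s, a) + E[V(s')]
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi

def evaluate(pi, payoff):
    """Expected total payoff of policy pi starting from state 0."""
    V = np.zeros(S)
    idx = np.arange(S)
    for h in reversed(range(H)):
        a = pi[h]
        V = payoff[idx, a] + (P[idx, a] * V).sum(axis=1)
    return V[0]

lam, eta = 0.0, 0.5                          # dual variable and step size
for t in range(200):
    pi = plan(r - lam * c)                   # primal step: best policy for the Lagrangian
    viol = evaluate(pi, c) - tau             # signed constraint violation
    lam = max(0.0, lam + eta * viol)         # dual step: projected gradient ascent

print("reward value:", evaluate(pi, r))
print("cost value:  ", evaluate(pi, c), "budget:", tau)
```

The dual variable $\lambda$ grows while the policy overspends the cost budget and shrinks toward zero once the constraint is slack, trading reward for feasibility.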
Computer Science > Machine Learning
arXiv:2602.15076 (cs)
[Submitted on 16 Feb 2026]
Title: Near-Optimal Sample Complexity for Online Constrained MDPs
Authors: Chang Liu, Yunfan Li, Lin F. Yang
Abstract: Safety is a fundamental challenge in reinforcement learning (RL), particularly in real-world applications such as autonomous driving, robotics, and healthcare. To address this, Constrained Markov Decision Processes (CMDPs) are commonly used to enforce safety constraints while optimizing performance. However, existing methods often suffer from significant safety violations or require a high sample complexity to generate near-optimal policies. We address two settings: relaxed feasibility, where small violations are allowed, and strict feasibility, where no violation is allowed. We propose a model-based primal-dual algorithm that balances regret and bounded constraint violations, drawing on techniques from online RL and constrained optimization. For relaxed feasibility, we prove that our algorithm returns an $\varepsilon$-optimal policy with $\varepsilon$-bounded violation with arbitrarily high probability, requiring $\tilde{O}\left(\frac{SAH^3}{\varepsilon^2}\right)$ learning episodes, matching the lower bound for unconstrained MDPs. For strict feasibility, we prove that our algorithm returns an $\varepsilon$-optimal policy wit...
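The gap between the two regimes can be made tangible by plugging hypothetical problem sizes into the stated bounds (the values of $S$, $A$, $H$, $\varepsilon$, $\zeta$ below are arbitrary, and the constants and log factors hidden by the $\tilde{O}$ notation are ignored):

```python
# Illustrative comparison of the two episode-complexity regimes.
# All numbers are hypothetical; tilde-O constants/log factors are dropped.
S, A, H = 100, 10, 20
eps, zeta = 0.1, 0.05

relaxed = S * A * H**3 / eps**2              # ~ SAH^3 / eps^2
strict = S * A * H**5 / (eps**2 * zeta**2)   # ~ SAH^5 / (eps^2 zeta^2)

print(f"relaxed feasibility: ~{relaxed:.2e} episodes")
print(f"strict feasibility:  ~{strict:.2e} episodes")
print(f"strict/relaxed ratio = H^2 / zeta^2 = {strict / relaxed:.0f}")
```

The ratio $H^2/\zeta^2$ shows where demanding zero violation becomes expensive: the extra horizon factor and the dependence on the feasibility parameter $\zeta$ dominate as $\zeta$ shrinks.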