[2602.15076] Near-Optimal Sample Complexity for Online Constrained MDPs

arXiv - Machine Learning 4 min read Article

Summary

This paper presents a model-based primal-dual algorithm for online constrained Markov Decision Processes (CMDPs), achieving near-optimal sample complexity while ensuring safety in reinforcement learning applications.

Why It Matters

Safety is crucial in reinforcement learning, especially for real-world applications like autonomous driving and healthcare. This research addresses the challenges of enforcing safety constraints while optimizing performance, providing a significant advancement in the field of machine learning.

Key Takeaways

  • Proposes a model-based primal-dual algorithm for CMDPs.
  • Achieves $\tilde{O}\left(\frac{SAH^3}{\varepsilon^2}\right)$ episodes for relaxed feasibility with bounded violations.
  • Achieves $\tilde{O}\left(\frac{SAH^5}{\varepsilon^2\zeta^2}\right)$ episodes for strict feasibility with zero violations.
  • Shows that, under relaxed feasibility, learning CMDPs is no harder than learning unconstrained MDPs: the bound matches the lower bound for unconstrained MDPs.
  • Addresses significant safety concerns in real-world reinforcement learning applications.

Computer Science > Machine Learning · arXiv:2602.15076 (cs) · Submitted on 16 Feb 2026

Title: Near-Optimal Sample Complexity for Online Constrained MDPs
Authors: Chang Liu, Yunfan Li, Lin F. Yang

Abstract: Safety is a fundamental challenge in reinforcement learning (RL), particularly in real-world applications such as autonomous driving, robotics, and healthcare. To address this, Constrained Markov Decision Processes (CMDPs) are commonly used to enforce safety constraints while optimizing performance. However, existing methods often suffer from significant safety violations or require a high sample complexity to generate near-optimal policies. We address two settings: relaxed feasibility, where small violations are allowed, and strict feasibility, where no violation is allowed. We propose a model-based primal-dual algorithm that balances regret and bounded constraint violations, drawing on techniques from online RL and constrained optimization. For relaxed feasibility, we prove that our algorithm returns an $\varepsilon$-optimal policy with $\varepsilon$-bounded violation with arbitrarily high probability, requiring $\tilde{O}\left(\frac{SAH^3}{\varepsilon^2}\right)$ learning episodes, matching the lower bound for unconstrained MDPs. For strict feasibility, we prove that our algorithm returns an $\varepsilon$-optimal policy wit...
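To make the primal-dual idea concrete, here is a minimal illustrative sketch, not the paper's algorithm: a generic Lagrangian primal-dual loop on a toy one-step constrained problem. The reward/cost tables, budget, and step size are invented for demonstration; the paper's method additionally learns a model of an episodic CMDP and controls regret and violations with high probability.

```python
# Hedged sketch of primal-dual constrained optimization (toy example,
# not the paper's model-based algorithm). All numbers are hypothetical.

rewards = {"safe": 1.0, "risky": 2.0}   # per-action reward
costs   = {"safe": 0.0, "risky": 1.0}   # per-action constraint cost
budget  = 0.4                           # allowed expected cost
lam, eta = 0.0, 0.05                    # dual variable and dual step size

for _ in range(2000):
    # Primal step: act greedily on the Lagrangian r(a) - lam * c(a).
    action = max(rewards, key=lambda a: rewards[a] - lam * costs[a])
    # Dual step: raise lam when the constraint is violated, lower it
    # (down to zero) when there is slack.
    lam = max(0.0, lam + eta * (costs[action] - budget))

print(action, round(lam, 2))
```

The dual variable settles near the value at which the two actions' Lagrangian scores tie, so the induced (randomized) policy trades reward against the cost budget; the paper's analysis shows how to make this trade-off sample-efficient in the online episodic setting.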

Related Articles

Nomadic raises $8.4 million to wrangle the data pouring off autonomous vehicles | TechCrunch
Machine Learning

The company turns footage from robots into structured, searchable datasets with a deep learning model.

TechCrunch - AI · 6 min ·
Machine Learning

The AI Chip War is Just Getting Started

Everyone talks about AI models, but the real bottleneck might be hardware. According to a recent study by Roots Analysis: AI chip market ...

Reddit - Artificial Intelligence · 1 min ·
Robotics

What happens when AI agents can earn and spend real money? I built a small test to find out

I've been sitting with a question for a while: what happens when AI agents aren't just tools to be used, but participants in an economy? ...

Reddit - Artificial Intelligence · 1 min ·
Robotics

AIPass Herald

Some insight into building a multi-agent autonomous system. This is like the daily newspaper for the project. A quick read to see how ou...

Reddit - Artificial Intelligence · 1 min ·