Computer Science > Machine Learning

arXiv:2603.23461 (cs)

[Submitted on 24 Mar 2026]

Title: End-to-End Efficient RL for Linear Bellman Complete MDPs with Deterministic Transitions

Authors: Zakaria Mhammedi, Alexander Rakhlin, Nneka Okolo

Abstract: We study reinforcement learning (RL) with linear function approximation in Markov Decision Processes (MDPs) satisfying \emph{linear Bellman completeness} -- a fundamental setting in which the Bellman backup of any linear value function remains linear. While this setting is statistically tractable, prior computationally efficient algorithms are either limited to small action spaces or require strong oracle assumptions over the feature space. We provide a computationally efficient algorithm for linear Bellman complete MDPs with \emph{deterministic transitions}, stochastic initial states, and stochastic rewards. For finite action spaces, our algorithm is end-to-end efficient; for large or infinite action spaces, it requires only a standard argmax oracle over actions. Our algorithm learns an $\varepsilon$-optimal policy with sample and computational complexity polynomial in the horizon, feature dimension, and $1/\varepsilon$.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.23461 [cs.LG] (or arXiv:2603.23461v1 [cs.LG] for this version)
https://doi.org/10.48550/...
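As a note on the setting described in the abstract: linear Bellman completeness is usually stated as follows. This is a sketch of the standard definition from the literature, written with illustrative notation (feature map $\phi$, horizon index $h$, parameter $\theta$, backup map $\mathcal{T}_h$) that is not taken from the paper itself:

\[
\forall\, \theta \in \mathbb{R}^d,\ \exists\, \mathcal{T}_h(\theta) \in \mathbb{R}^d:\quad
\big\langle \phi(s,a),\, \mathcal{T}_h(\theta) \big\rangle
= \mathbb{E}\Big[\, r_h(s,a) + \max_{a' \in \mathcal{A}} \big\langle \phi(s_{h+1}, a'),\, \theta \big\rangle \,\Big|\, s_h = s,\ a_h = a \Big]
\qquad \text{for all } (s,a).
\]

That is, the Bellman backup of the linear value function $V_\theta(s) = \max_{a} \langle \phi(s,a), \theta \rangle$ is itself linear in the features. Under the paper's deterministic-transition assumption, $s_{h+1}$ is a fixed function of $(s,a)$, so the expectation above is only over the stochastic reward $r_h(s,a)$.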