[2510.24482] Sample-efficient and Scalable Exploration in Continuous-Time RL
Computer Science > Machine Learning

arXiv:2510.24482 (cs)

[Submitted on 28 Oct 2025 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: Sample-efficient and Scalable Exploration in Continuous-Time RL

Authors: Klemens Iten, Lenart Treven, Bhavya Sukhija, Florian Dörfler, Andreas Krause

Abstract: Reinforcement learning algorithms are typically designed for discrete-time dynamics, even though the underlying real-world control systems are often continuous in time. In this paper, we study the problem of continuous-time reinforcement learning, where the unknown system dynamics are represented by nonlinear ordinary differential equations (ODEs). We leverage probabilistic models, such as Gaussian processes and Bayesian neural networks, to learn an uncertainty-aware model of the underlying ODE. Our algorithm, COMBRL, greedily maximizes a weighted sum of the extrinsic reward and the model's epistemic uncertainty. This yields a scalable and sample-efficient approach to continuous-time model-based RL. We show that COMBRL achieves sublinear regret in the reward-driven setting, and in the unsupervised RL setting (i.e., without extrinsic rewards) we provide a sample complexity bound. In our experiments, we evaluate COMBRL in both standard and unsupervised RL settings and demonstrate that it scales better, is more sample-efficient…
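The abstract gives no implementation details, but the stated objective, a weighted sum of extrinsic reward and model epistemic uncertainty, can be sketched concretely. The snippet below is a minimal illustration, not the paper's method: it uses ensemble disagreement as a stand-in for the epistemic uncertainty of a learned ODE model, and all names (`epistemic_std`, `augmented_reward`, the weight `beta`) and the toy linear ensemble are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a posterior over ODE dynamics f(x, u) -> dx/dt:
# an ensemble of random linear models (A_i, B_i), whose spread mimics
# epistemic uncertainty about the true dynamics.
ensemble = [
    (rng.normal(size=(2, 2)), rng.normal(size=(2, 1)))
    for _ in range(5)
]

def epistemic_std(x, u):
    """Per-dimension disagreement across ensemble predictions of dx/dt,
    a common proxy for the model's epistemic uncertainty (assumption,
    not the paper's estimator)."""
    preds = np.stack([A @ x + B @ u for A, B in ensemble])
    return preds.std(axis=0)

def augmented_reward(x, u, extrinsic_reward, beta=1.0):
    """COMBRL-style objective as described in the abstract: extrinsic
    reward plus a weighted epistemic-uncertainty bonus. `beta` trades
    off exploitation against exploration; the name is illustrative."""
    return extrinsic_reward + beta * np.linalg.norm(epistemic_std(x, u))

x = np.array([[1.0], [0.0]])   # state
u = np.array([[0.5]])          # control input
extrinsic = -float((x ** 2).sum())  # toy quadratic state cost
print(augmented_reward(x, u, extrinsic))
```

Greedily maximizing such an augmented reward makes the agent seek out regions where the learned dynamics model is uncertain while still pursuing the task reward; with `beta = 0` it reduces to standard reward maximization, and with only the bonus term it matches the unsupervised (reward-free) setting mentioned in the abstract.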