[2604.02260] Model-Based Reinforcement Learning for Control under Time-Varying Dynamics
Computer Science > Machine Learning
arXiv:2604.02260 (cs)
[Submitted on 2 Apr 2026]

Title: Model-Based Reinforcement Learning for Control under Time-Varying Dynamics
Authors: Klemens Iten, Bruce Lee, Chenhao Li, Lenart Treven, Andreas Krause, Bhavya Sukhija

Abstract: Learning-based control methods typically assume stationary system dynamics, an assumption often violated in real-world systems due to drift, wear, or changing operating conditions. We study reinforcement learning for control under time-varying dynamics: a continual model-based reinforcement learning setting in which an agent repeatedly learns and controls a dynamical system whose transition dynamics evolve across episodes. We analyze the problem using Gaussian process dynamics models under frequentist variation-budget assumptions. Our analysis shows that persistent non-stationarity requires explicitly limiting the influence of outdated data to maintain calibrated uncertainty and meaningful dynamic regret guarantees. Motivated by these insights, we propose a practical optimistic model-based reinforcement learning algorithm with adaptive data buffer mechanisms and demonstrate improved performance on continuous control benchmarks with non-stationary dynamics.

Subjects: Machine Learning (cs.LG); Robotics (cs.RO)
Cite as: arXiv...
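The abstract's central point, that under non-stationarity the influence of outdated data must be explicitly limited, can be illustrated with a minimal sketch (not the paper's algorithm): a Gaussian process regressor whose training set is a fixed-size sliding-window buffer, so that transitions from before a dynamics shift are eventually evicted. All names (`SlidingWindowGP`, the window size, kernel hyperparameters, the sin/-sin drift) are hypothetical choices for illustration.

```python
# Minimal sketch, assuming a sliding-window buffer as one simple "adaptive
# data buffer" mechanism. Not the paper's method; all names are hypothetical.
from collections import deque
import numpy as np

class SlidingWindowGP:
    """GP regression (RBF kernel) trained only on the most recent samples."""

    def __init__(self, window_size=30, lengthscale=0.5, noise_var=1e-2):
        self.buffer = deque(maxlen=window_size)  # oldest samples evicted
        self.lengthscale = lengthscale
        self.noise_var = noise_var

    def _kernel(self, A, B):
        # Squared-exponential kernel with unit signal variance.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / self.lengthscale**2)

    def add(self, x, y):
        self.buffer.append((np.atleast_1d(x), float(y)))

    def predict(self, x_star):
        # Standard GP posterior mean and variance on the windowed data.
        X = np.array([x for x, _ in self.buffer])
        y = np.array([t for _, t in self.buffer])
        Xs = np.atleast_2d(x_star)
        K = self._kernel(X, X) + self.noise_var * np.eye(len(X))
        Ks = self._kernel(Xs, X)
        mean = Ks @ np.linalg.solve(K, y)
        var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T)) \
              + self.noise_var
        return mean, var

# Dynamics drift mid-stream: the target switches from sin to -sin, standing
# in for a transition function that changes across episodes.
gp = SlidingWindowGP(window_size=30)
rng = np.random.default_rng(0)
xs = rng.uniform(-2, 2, size=120)
for i, x in enumerate(xs):
    f = np.sin if i < 60 else (lambda z: -np.sin(z))
    gp.add([x], f(x) + 0.05 * rng.standard_normal())

# After the shift, the window holds only post-shift data, so the posterior
# tracks the new dynamics f(x) = -sin(x) instead of averaging both regimes.
mean, var = gp.predict([[1.0]])
```

Without the `maxlen` bound the buffer would mix pre- and post-shift samples, biasing the posterior mean toward zero and leaving the predictive variance overconfident about a function the system no longer follows; bounding (or adaptively resizing) the buffer is one way to keep the uncertainty estimates calibrated, in the spirit of the mechanisms the abstract describes.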