[2603.04029] Self-adapting Robotic Agents through Online Continual Reinforcement Learning with World Model Feedback
Computer Science > Robotics arXiv:2603.04029 (cs) [Submitted on 4 Mar 2026] Title: Self-adapting Robotic Agents through Online Continual Reinforcement Learning with World Model Feedback Authors: Fabian Domberg, Georg Schildbach Abstract: Learning-based robotic controllers are typically trained offline and deployed with fixed parameters, which limits their ability to cope with unforeseen changes during operation. Inspired by biological adaptation, this work presents a framework for online Continual Reinforcement Learning that enables automated adaptation during deployment. Building on DreamerV3, a model-based Reinforcement Learning algorithm, the proposed method leverages world model prediction residuals to detect out-of-distribution events and automatically trigger finetuning. Adaptation progress is monitored using both task-level performance signals and internal training metrics, so that convergence can be assessed without external supervision or domain knowledge. The approach is validated on a variety of contemporary continuous control problems, including a quadruped robot in high-fidelity simulation and a real-world model vehicle. Relevant metrics and their interpretation are presented and discussed, along with the resulting trade-offs. The results sketch out how autonomous rob...
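The core mechanism described in the abstract, using world model prediction residuals to detect out-of-distribution events and trigger finetuning, can be illustrated with a minimal sketch. This is not the paper's implementation: the window sizes, the MSE residual, and the "mean plus k standard deviations" threshold are all illustrative assumptions, and a real DreamerV3-based system would compute residuals in the model's latent space.

```python
from collections import deque
import numpy as np

class ResidualTrigger:
    """Sketch of residual-based OOD detection for a world model.

    Tracks the one-step prediction error of a world model and flags an
    out-of-distribution event (which would trigger finetuning) when the
    recent residual rises well above the calibration baseline. All
    hyperparameters here are illustrative, not taken from the paper.
    """

    def __init__(self, calib_window=500, recent_window=20, factor=3.0):
        self.calib = deque(maxlen=calib_window)    # in-distribution residuals
        self.recent = deque(maxlen=recent_window)  # most recent residuals
        self.factor = factor                       # threshold in std. devs

    def update(self, predicted_obs, actual_obs):
        # Residual: mean squared error between the world model's
        # predicted next observation and the one actually observed.
        r = float(np.mean((np.asarray(predicted_obs)
                           - np.asarray(actual_obs)) ** 2))
        self.recent.append(r)
        if not self.is_ood():  # only calibrate on in-distribution data
            self.calib.append(r)
        return r

    def is_ood(self):
        # Flag OOD when the recent mean residual exceeds the
        # calibration mean by `factor` standard deviations.
        if len(self.calib) < 50 or not self.recent:
            return False
        mu = np.mean(self.calib)
        sigma = np.std(self.calib) + 1e-8
        return bool(np.mean(self.recent) > mu + self.factor * sigma)
```

In a deployment loop, `is_ood()` returning true would switch the agent from pure inference into online finetuning, and the same residual statistics (together with task-level reward) could later indicate that adaptation has converged.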