[2603.28074] Koopman-based surrogate modeling for reinforcement-learning-control of Rayleigh-Bénard convection
Computer Science > Machine Learning
arXiv:2603.28074 (cs)
[Submitted on 30 Mar 2026]

Title: Koopman-based surrogate modeling for reinforcement-learning-control of Rayleigh-Bénard convection
Authors: Tim Plotzki, Sebastian Peitz

Abstract: Training reinforcement learning (RL) agents to control fluid dynamics systems is computationally expensive due to the high cost of direct numerical simulations (DNS) of the governing equations. Surrogate models offer a promising alternative by approximating the dynamics at a fraction of the computational cost, but their feasibility as training environments for RL is limited by distribution shift: policies induce state distributions not covered by the surrogate's training data. In this work, we investigate the use of Linear Recurrent Autoencoder Networks (LRANs) to accelerate RL-based control of 2D Rayleigh-Bénard convection. We evaluate two training strategies: a surrogate trained on precomputed data generated with random actions, and a policy-aware surrogate trained iteratively on data collected under an evolving policy. Our results show that while surrogate-only training leads to reduced control performance, combining surrogates with DNS in a pretraining scheme recovers state-of-the-art performance while reducing training time by more than 40%...
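The abstract names LRANs but does not describe their internals. In the Koopman surrogate-modeling literature, an LRAN typically pairs a nonlinear encoder/decoder with a strictly linear latent transition map, which is what makes long rollouts cheap. The sketch below is a minimal PyTorch illustration under that reading; the layer sizes, the additive control term B a_t, and the equal loss weighting are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class LRAN(nn.Module):
    """Minimal Linear Recurrent Autoencoder Network sketch (hypothetical)."""

    def __init__(self, state_dim: int, action_dim: int, latent_dim: int = 32):
        super().__init__()
        # Nonlinear encoder: physical state -> Koopman latent coordinates.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )
        # Linear latent dynamics with an additive linear control term:
        # z_{t+1} = K z_t + B a_t (finite-dimensional Koopman approximation).
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)
        self.B = nn.Linear(action_dim, latent_dim, bias=False)
        # Decoder: latent coordinates -> reconstructed physical state.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, state_dim)
        )

    def forward(self, x0: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        # Roll the linear latent dynamics forward for actions.shape[1] steps.
        z = self.encoder(x0)
        preds = []
        for t in range(actions.shape[1]):
            z = self.K(z) + self.B(actions[:, t])
            preds.append(self.decoder(z))
        return torch.stack(preds, dim=1)  # (batch, horizon, state_dim)

def lran_loss(model: LRAN, x_traj: torch.Tensor, a_traj: torch.Tensor) -> torch.Tensor:
    # Multi-step prediction loss plus a one-step reconstruction loss,
    # both plain MSE here for simplicity.
    preds = model(x_traj[:, 0], a_traj)
    pred_loss = nn.functional.mse_loss(preds, x_traj[:, 1:])
    recon = model.decoder(model.encoder(x_traj[:, 0]))
    recon_loss = nn.functional.mse_loss(recon, x_traj[:, 0])
    return pred_loss + recon_loss

Because the latent dynamics are linear, a trained LRAN can be rolled out for many steps with cheap matrix-vector products, which is what makes it viable as an inexpensive RL training environment compared to DNS.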
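The two training strategies and the pretraining scheme can be summarized as a loop that alternates surrogate refits with policy updates. The outline below is a hypothetical sketch of that workflow; collect_rollouts, fit_surrogate, make_surrogate_env, and train_rl are placeholder callables standing in for components the abstract only names, not the paper's API.

from typing import Any, Callable, List

def policy_aware_training(
    dns_env: Any,
    policy: Any,
    collect_rollouts: Callable[..., List],  # gathers DNS trajectories
    fit_surrogate: Callable,                # refits the LRAN on a dataset
    make_surrogate_env: Callable,           # wraps the LRAN as an RL env
    train_rl: Callable,                     # one RL training phase in an env
    n_iterations: int = 5,
) -> Any:
    # Seed the surrogate dataset with DNS rollouts under the initial policy.
    dataset: List = collect_rollouts(dns_env, policy)
    for _ in range(n_iterations):
        surrogate = fit_surrogate(dataset)                # refit the LRAN
        train_rl(policy, make_surrogate_env(surrogate))   # cheap RL updates
        # Collect fresh DNS data under the updated policy so the surrogate's
        # training distribution tracks the states the policy now visits.
        dataset += collect_rollouts(dns_env, policy)
    # Pretraining scheme from the abstract: finish with RL in the true DNS
    # environment to recover full control performance.
    train_rl(policy, dns_env)
    return policy

Refitting on data from the evolving policy directly targets the distribution shift the abstract identifies, while the final DNS phase trades back some compute for the performance that surrogate-only training loses.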