[2601.04268] Replacing Tunable Parameters in Weather and Climate Models with State-Dependent Functions using Reinforcement Learning
arXiv:2601.04268 (cs)
Computer Science > Machine Learning
Submitted on 7 Jan 2026 (v1); last revised 8 Apr 2026 (this version, v2)

Title: Replacing Tunable Parameters in Weather and Climate Models with State-Dependent Functions using Reinforcement Learning
Authors: Pritthijit Nath, Sebastian Schemm, Henry Moss, Peter Haynes, Emily Shuckburgh, Mark J. Webb

Abstract: Weather and climate models rely on parametrisations to represent unresolved sub-grid processes. Traditional schemes use fixed coefficients that are weakly constrained and tuned offline, contributing to persistent biases and limiting their ability to adapt to the underlying physics. This study presents a framework that learns components of parametrisation schemes online, as functions of the evolving model state, using reinforcement learning (RL). RL-driven parameter updates are evaluated across idealised testbeds spanning a simple climate bias correction (SCBC), a radiative-convective equilibrium (RCE), and a zonal mean energy balance model (EBM), in both single-agent and federated multi-agent settings. Across nine RL algorithms, Truncated Quantile Critics (TQC), Deep Deterministic Policy Gradient (DDPG), and Twin Delayed DDPG (TD3) achieved the highest skill and stable convergence, with performance assessed against a static b...
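To make the idea of RL-driven online parameter updates concrete, the sketch below sets up a hypothetical toy analogue of a bias-correction testbed: the state is a drifting model temperature, the action is a scalar correction parameter, and the reward penalises bias against a target climatology. The environment, constants, and the gradient-free hill-climbing update are all illustrative assumptions, not the paper's actual testbeds or algorithms (the study itself uses algorithms such as TQC, DDPG, and TD3).

```python
import random

# Assumed target surface temperature (K), for illustration only.
TARGET = 288.0

def step(temperature, correction):
    """Apply the correction, score the bias, then advance the toy dynamics."""
    corrected = temperature + correction
    reward = -(corrected - TARGET) ** 2  # negative squared bias
    # Toy dynamics: the model drifts warm each step, mimicking a persistent bias.
    next_temperature = corrected + 0.5
    return next_temperature, reward

def train(episodes=200, seed=0):
    """Gradient-free hill climbing over one scalar parameter, standing in
    for an RL agent's online parameter updates."""
    rng = random.Random(seed)
    best_param, best_return = 0.0, float("-inf")
    for _ in range(episodes):
        candidate = best_param + rng.gauss(0.0, 0.2)
        temperature, total = 290.0, 0.0
        for _ in range(20):
            temperature, reward = step(temperature, candidate)
            total += reward
        if total > best_return:
            best_param, best_return = candidate, total
    return best_param

if __name__ == "__main__":
    param = train()
    print(f"learned correction: {param:.2f} K per step")
```

Because the toy model drifts warm by 0.5 K per step, the learned correction settles at a negative value that offsets the drift; in the paper's framework the analogous quantity would instead be a state-dependent function produced by an RL policy.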