[2508.02441] Computationally efficient Gauss-Newton reinforcement learning for model predictive control
arXiv:2508.02441 (eess) | Electrical Engineering and Systems Science > Systems and Control
[Submitted on 4 Aug 2025 (v1), last revised 2 Apr 2026 (this version, v2)]

Title: Computationally efficient Gauss-Newton reinforcement learning for model predictive control
Authors: Dean Brandner, Sebastien Gros, Sergio Lucia

Abstract: Model predictive control (MPC) is widely used in process control due to its interpretability and ability to handle constraints. As a parametric policy in reinforcement learning (RL), MPC offers strong initial performance and low data requirements compared to black-box policies like neural networks. However, most RL methods rely on first-order updates, which scale well to large parameter spaces but converge at most linearly, making them inefficient when each policy evaluation requires solving an optimal control problem, as is the case with MPC. MPC policies typically have low-dimensional parameterizations and are thus amenable to second-order approaches, but existing second-order methods demand second-order policy derivatives, which can be computationally intractable. This work introduces a Gauss-Newton approximation of the deterministic policy Hessian that eliminates the need for second-order policy derivatives, enabling superlinear convergence with minimal computational overhead. To further im...
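The core idea in the abstract, dropping second-order policy derivatives and keeping only the first-order policy sensitivity, can be illustrated on a toy problem. The sketch below is a hypothetical, low-dimensional stand-in (linear policy, quadratic critic), not the paper's MPC setting: the Gauss-Newton Hessian is built from the outer product of the policy Jacobian with the critic's curvature in the action, so no second derivatives of the policy are ever formed.

```python
import numpy as np

# Toy setup (illustrative only, not the paper's MPC policy):
#   deterministic policy a = pi_theta(s) = theta @ s   (linear in theta)
#   critic             Q(s, a) = -(a - a_star)**2      (concave quadratic in a)
s = np.array([1.0, 2.0])
a_star = 3.0  # hypothetical optimal action

def policy(theta, s):
    return theta @ s

def dQ_da(a):
    return -2.0 * (a - a_star)   # first derivative of Q w.r.t. action

def d2Q_da2(a):
    return -2.0                  # curvature of Q w.r.t. action

theta = np.zeros(2)
for _ in range(2):
    a = policy(theta, s)
    dpi = s                          # policy Jacobian d a / d theta (first-order only)
    grad = dpi * dQ_da(a)            # deterministic policy gradient
    # Gauss-Newton Hessian approximation: outer product of policy Jacobians
    # weighted by the critic curvature; second-order policy derivatives are dropped.
    H = np.outer(dpi, dpi) * d2Q_da2(a)
    H_reg = H - 1e-6 * np.eye(2)     # small regularization (H is rank-deficient)
    theta = theta - np.linalg.solve(H_reg, grad)  # Newton-type ascent step

print(policy(theta, s))  # close to a_star after very few steps
```

Because the toy critic is exactly quadratic in the action and the policy is linear in the parameters, the Gauss-Newton step is essentially exact here and the policy output reaches `a_star` almost immediately; in the general nonlinear case this construction is only an approximation of the true Hessian, which is what yields the superlinear (rather than quadratic) convergence claimed in the abstract.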