[2401.12546] On Building Myopic MPC Policies using Supervised Learning
Computer Science > Machine Learning

arXiv:2401.12546 (cs)

This paper has been withdrawn by Dinesh Krishnamoorthy

[Submitted on 23 Jan 2024 (v1), last revised 26 Mar 2026 (this version, v3)]

Title: On Building Myopic MPC Policies using Supervised Learning

Authors: Christopher A. Orrico, Bokan Yang, Dinesh Krishnamoorthy

Abstract: The application of supervised learning techniques in combination with model predictive control (MPC) has recently generated significant interest, particularly in the area of approximate explicit MPC, where function approximators like deep neural networks are used to learn the MPC policy via optimal state-action pairs generated offline. While the aim of approximate explicit MPC is to closely replicate the MPC policy, substituting online optimization with a trained neural network, the performance guarantees that come with solving the online optimization problem are typically lost. This paper considers an alternative strategy, where supervised learning is used to learn the optimal value function offline instead of learning the optimal policy. This can then be used as the cost-to-go function in a myopic MPC with a very short prediction horizon, such that the online computation burden reduces significantly without affecting the controller performance. This approach differs from ...
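The core idea in the abstract (learn the value function offline, then use it as the terminal cost of a short-horizon MPC) can be illustrated on a toy problem. The sketch below is not the paper's method or benchmark; it is a minimal assumed example on a scalar linear system with quadratic cost, where the long-horizon value function is a known quadratic, "supervised learning" is a least-squares fit of that quadratic from sampled state/value pairs, and the myopic controller solves a one-step problem with the learned cost-to-go. All system parameters and names (`a`, `b`, `q`, `r`, `myopic_mpc`) are illustrative.

```python
import numpy as np

# Toy scalar system x+ = a*x + b*u with stage cost q*x^2 + r*u^2
# (parameter values are illustrative, not from the paper).
a, b, q, r = 1.2, 1.0, 1.0, 0.5

# Long-horizon "expert" value function V(x) = p*x^2: iterate the scalar
# Riccati recursion to (approximate) convergence.
p = q
for _ in range(200):
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)

# Offline supervised learning step: fit V_hat(x) = w*x^2 to sampled
# state/value pairs by least squares (labels come from the expert value).
rng = np.random.default_rng(0)
xs = rng.uniform(-5.0, 5.0, size=200)
vs = p * xs**2
w = np.sum(xs**2 * vs) / np.sum(xs**4)

# Myopic (one-step) MPC with the learned cost-to-go:
#   u* = argmin_u  q*x^2 + r*u^2 + w*(a*x + b*u)^2,
# which is quadratic in u and solvable in closed form.
def myopic_mpc(x):
    return -a * b * w * x / (r + b * b * w)

# Optimal infinite-horizon LQR gain, for comparison with the myopic policy.
k_opt = a * b * p / (r + b * b * p)
```

Because the learned cost-to-go matches the true value function here, the one-step policy `myopic_mpc` recovers the optimal feedback gain `-k_opt`; with an imperfect fit, the gap between the two would reflect the approximation error in the learned value function.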