[2603.00043] Reinforcement Learning for Control with Probabilistic Stability Guarantee: A Finite-Sample Approach
Computer Science > Machine Learning
arXiv:2603.00043 (cs)
[Submitted on 9 Feb 2026]

Title: Reinforcement Learning for Control with Probabilistic Stability Guarantee: A Finite-Sample Approach
Authors: Minghao Han, Lixian Zhang, Chenliang Liu, Zhipeng Zhou, Jun Wang, Wei Pan

Abstract: This paper presents a novel approach to reinforcement learning (RL) for control systems that provides probabilistic stability guarantees using finite data. Leveraging Lyapunov's method, we propose a probabilistic stability theorem that ensures mean square stability using only a finite number of sampled trajectories. The probability of stability increases with the number and length of trajectories, converging to certainty as the data size grows. Additionally, we derive a policy gradient theorem for stabilizing policy learning and develop an RL algorithm, L-REINFORCE, that extends the classical REINFORCE algorithm to stabilization problems. The effectiveness of L-REINFORCE is demonstrated through simulations on a Cartpole task, where it outperforms the baseline in ensuring stability. This work bridges a critical gap between RL and control theory, enabling stability analysis and controller design in a model-free framework with finite data.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
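To make the finite-sample idea concrete, the following is a minimal hypothetical sketch, not the paper's actual theorem or the L-REINFORCE algorithm: it estimates the one-step Lyapunov drift E[V(x') - V(x)] from a finite set of sampled trajectories of a toy closed-loop system, and attaches a Hoeffding-style confidence to the conclusion that the drift is negative. The system, the candidate V(x) = x^2, the i.i.d. treatment of per-step drifts, and the bound `dV_bound` are all simplifying assumptions introduced here for illustration.

```python
import numpy as np

def finite_sample_stability_check(step, x0_sampler, V, n_traj=2000,
                                  horizon=10, dV_bound=1.0, seed=0):
    """Estimate the one-step Lyapunov drift E[V(x') - V(x)] from finitely
    many sampled trajectories and attach a Hoeffding-style confidence.

    Illustrative only (NOT the paper's theorem): per-step drifts are
    treated as i.i.d. samples bounded by |dV| <= dV_bound, both of which
    are simplifying assumptions made for this sketch.
    """
    rng = np.random.default_rng(seed)
    drifts = []
    for _ in range(n_traj):
        x = x0_sampler(rng)
        for _ in range(horizon):
            x_next = step(x, rng)
            drifts.append(V(x_next) - V(x))
            x = x_next
    drifts = np.asarray(drifts)
    m, n = drifts.mean(), drifts.size
    if m >= 0:
        # No evidence of decrease: cannot certify stability.
        return False, m, 0.0
    # Hoeffding bound: P(true drift >= 0) <= exp(-n m^2 / (2 dV_bound^2)),
    # so the negative-drift conclusion holds with at least this confidence.
    confidence = 1.0 - np.exp(-n * m**2 / (2.0 * dV_bound**2))
    return True, m, confidence

# Toy contractive closed-loop scalar system x' = 0.8 x + noise, V(x) = x^2.
stable, drift, conf = finite_sample_stability_check(
    step=lambda x, rng: 0.8 * x + 0.05 * rng.standard_normal(),
    x0_sampler=lambda rng: rng.uniform(-1.0, 1.0),
    V=lambda x: x**2,
)
print(stable, drift < 0)
```

Consistent with the abstract's claim, increasing `n_traj` or `horizon` grows the sample size n and drives the Hoeffding confidence toward 1 for a genuinely contractive system.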