[2603.01292] Integrating LTL Constraints into PPO for Safe Reinforcement Learning
Computer Science > Machine Learning
arXiv:2603.01292 (cs)
[Submitted on 1 Mar 2026]

Title: Integrating LTL Constraints into PPO for Safe Reinforcement Learning

Authors: Maifang Zhang, Hang Yu, Qian Zuo, Cheng Wang, Vaishak Belle, Fengxiang He

Abstract: This paper proposes Proximal Policy Optimization with Linear Temporal Logic Constraints (PPO-LTL), a framework that integrates safety constraints written in LTL into PPO for safe reinforcement learning. LTL constraints offer rigorous representations of complex safety requirements, such as the regulations that are widespread in robotics, and enable systematic monitoring of those requirements. Violations of LTL constraints are detected by limit-deterministic Büchi automata and translated by a logic-to-cost mechanism into penalty signals, which in turn guide policy optimization via a Lagrangian scheme. Extensive experiments on the Zones and CARLA environments show that PPO-LTL consistently reduces safety violations while maintaining task performance competitive with state-of-the-art methods. The code is at this https URL.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Logic in Computer Science (cs.LO); Robotics (cs.RO)
Cite as: arXiv:2603.01292 [cs.LG] (or arXiv:2603.01292v1 [cs.LG] for this vers...
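The pipeline described in the abstract (automaton-based violation monitoring, logic-to-cost translation, Lagrangian penalty) can be illustrated with a minimal sketch. This is not the authors' implementation: the two-state automaton here encodes only the simple safety property G(!collision), and all function names, the cost budget, and the multiplier learning rate are illustrative assumptions.

```python
# Hedged sketch of the abstract's pipeline: an automaton monitors a
# safety property, violations become costs, and a Lagrange multiplier
# folds the costs into the reward used for policy optimization.
# The automaton below encodes G(!collision): once a collision occurs,
# the run is permanently in the violating (absorbing) state.

def make_safety_monitor():
    """Two-state deterministic monitor for G(!collision):
    q=0 means the run is still safe, q=1 means it has violated."""
    state = {"q": 0}

    def step(collision: bool) -> float:
        if state["q"] == 0 and collision:
            state["q"] = 1  # absorbing violation state
        # logic-to-cost: emit cost 1.0 while in the violating state
        return 1.0 if state["q"] == 1 else 0.0

    return step

def lagrangian_penalty(rewards, costs, lam, lr_lam=0.1, budget=0.0):
    """Combine rewards and costs into penalized returns (r - lam * c)
    and update the multiplier by gradient ascent on the dual:
    lam <- max(0, lam + lr * (avg_cost - budget))."""
    penalized = [r - lam * c for r, c in zip(rewards, costs)]
    avg_cost = sum(costs) / len(costs)
    lam = max(0.0, lam + lr_lam * (avg_cost - budget))
    return penalized, lam

if __name__ == "__main__":
    monitor = make_safety_monitor()
    # Costs for a trajectory with a collision at step 2:
    costs = [monitor(c) for c in [False, False, True, False]]
    penalized, lam = lagrangian_penalty([1.0] * 4, costs, lam=0.5)
    print(costs, penalized, lam)
```

In a full PPO-Lagrangian setup, the penalized returns would feed the usual clipped surrogate objective, and the multiplier update would run once per batch rather than per trajectory.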