[2602.24235] SafeGen-LLM: Enhancing Safety Generalization in Task Planning for Robotic Systems
Computer Science > Robotics
arXiv:2602.24235 (cs) [Submitted on 27 Feb 2026]

Title: SafeGen-LLM: Enhancing Safety Generalization in Task Planning for Robotic Systems
Authors: Jialiang Fan, Weizhe Xu, Mengyu Liu, Oleg Sokolsky, Insup Lee, Fangxin Kong

Abstract: Safety-critical task planning in robotic systems remains challenging: classical planners scale poorly, Reinforcement Learning (RL)-based methods generalize poorly, and base Large Language Models (LLMs) cannot guarantee safety. To address this gap, we propose a safety-generalizable large language model, SafeGen-LLM. SafeGen-LLM not only improves the safety compliance of task plans but also generalizes to novel safety properties across domains. We first construct a multi-domain Planning Domain Definition Language 3 (PDDL3) benchmark with explicit safety constraints. We then introduce a two-stage post-training framework: Supervised Fine-Tuning (SFT) on a constraint-compliant planning dataset to learn planning syntax and semantics, followed by Group Relative Policy Optimization (GRPO), guided by fine-grained reward machines derived from formal verification to enforce safety alignment and by curriculum learning to better handle complex tasks. Extensive experiments show that SafeGen-LLM achieves strong safety generalization ...
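The abstract's second training stage combines per-plan safety rewards with GRPO's group-relative advantage normalization. The sketch below illustrates that idea only: `safety_reward`, the `forbidden` action set, and the toy plans are hypothetical stand-ins for the paper's reward machines derived from formal verification of PDDL3 constraints, and the normalization shown is the generic GRPO step (reward minus group mean, divided by group standard deviation), not SafeGen-LLM's exact implementation.

```python
from statistics import mean, pstdev

def safety_reward(plan, forbidden):
    """Toy reward machine (hypothetical): fraction of steps avoiding
    forbidden actions, plus a bonus when the whole plan is compliant."""
    if not plan:
        return 0.0
    safe_steps = sum(1 for step in plan if step not in forbidden)
    frac = safe_steps / len(plan)
    return frac + (1.0 if safe_steps == len(plan) else 0.0)

def group_relative_advantages(rewards):
    """Generic GRPO normalization: score each sampled plan relative
    to the other plans sampled for the same task."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against a uniform group
    return [(r - mu) / sigma for r in rewards]

# Four candidate plans sampled for one task; "enter-zone" is unsafe here.
forbidden = {"enter-zone"}
group = [
    ["pick", "move", "place"],        # fully safe
    ["pick", "enter-zone", "place"],  # one violation
    ["move", "place"],                # fully safe
    ["enter-zone", "enter-zone"],     # all violations
]
rewards = [safety_reward(p, forbidden) for p in group]
advs = group_relative_advantages(rewards)
```

Fully compliant plans receive the compliance bonus and thus positive advantages, while violating plans are pushed down relative to their group, which is what steers the policy toward constraint-satisfying outputs during RL.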