[2603.14469] Physics-Informed Policy Optimization via Analytic Dynamics Regularization
Computer Science > Robotics

arXiv:2603.14469 (cs)

[Submitted on 15 Mar 2026 (v1), last revised 21 Mar 2026 (this version, v2)]

Title: Physics-Informed Policy Optimization via Analytic Dynamics Regularization

Authors: Namai Chandra, Liu Mohan, Zhihao Gu, Lin Wang

Abstract: Reinforcement learning (RL) has achieved strong performance in robotic control; however, state-of-the-art policy learning methods, such as actor-critic methods, still suffer from high sample complexity and often produce physically inconsistent actions. This limitation arises because neural policies must implicitly rediscover complex physics from data alone, even though accurate dynamics models are readily available in simulators. In this paper, we introduce PIPER, a physics-informed RL framework that integrates physical knowledge directly into neural policy optimization through analytic soft constraints. At the core of our method is a differentiable Lagrangian residual used as a regularization term within the actor's objective. This residual, extracted from a robot's simulator description, subtly biases policy updates towards dynamically consistent solutions. Crucially, this physics integration is realized through an additional loss term during policy optimization, requiring no alterations to existing...
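To make the idea concrete, the following is a minimal PyTorch sketch of how a differentiable Lagrangian residual could be added to an actor objective as a soft regularizer. It is not the authors' code: the function names, the `lambda_phys` weight, and the batched `M`, `C`, `g` callables (assumed to be derived from the robot's simulator description) are all illustrative assumptions.

```python
import torch

def lagrangian_residual(q, qd, qdd, tau, M, C, g):
    """Residual of the manipulator equation M(q) q'' + C(q, q') q' + g(q) - tau.

    q, qd, qdd, tau: (batch, n) joint positions, velocities, accelerations,
    and policy torque actions. M, C, g are differentiable callables returning
    the (batch, n, n) mass matrix, (batch, n, n) Coriolis matrix, and
    (batch, n) gravity vector; assumed extracted from the simulator model.
    """
    return (torch.einsum("bij,bj->bi", M(q), qdd)
            + torch.einsum("bij,bj->bi", C(q, qd), qd)
            + g(q) - tau)

def regularized_actor_loss(actor_loss, q, qd, qdd, tau, M, C, g,
                           lambda_phys=0.1):
    """Standard actor objective plus a soft physics penalty.

    Gradients flow through tau (the policy output), biasing updates
    towards dynamically consistent actions without changing the base
    actor-critic update itself.
    """
    r = lagrangian_residual(q, qd, qdd, tau, M, C, g)
    return actor_loss + lambda_phys * r.pow(2).sum(dim=-1).mean()
```

In a sketch like this, `lambda_phys` trades off reward maximization against dynamic consistency; because the physics enters only as an extra loss term, the surrounding policy-optimization loop can stay unchanged, matching the abstract's claim of a drop-in regularizer.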