[2602.23804] Actor-Critic Pretraining for Proximal Policy Optimization
Computer Science > Machine Learning
arXiv:2602.23804 (cs) [Submitted on 27 Feb 2026]

Title: Actor-Critic Pretraining for Proximal Policy Optimization
Authors: Andreas Kernbach, Amr Elsheikh, Nicolas Grupp, René Nagel, Marco F. Huber

Abstract: Reinforcement learning (RL) actor-critic algorithms enable autonomous learning but often require a large number of environment interactions, which limits their applicability in robotics. Leveraging expert data can reduce the number of required interactions. A common approach is actor pretraining, in which the actor network is initialized via behavioral cloning on expert demonstrations and subsequently fine-tuned with RL. In contrast, the initialization of the critic network has received little attention, despite its central role in policy optimization. This paper proposes a pretraining approach for actor-critic algorithms such as Proximal Policy Optimization (PPO) that uses expert demonstrations to initialize both networks. The actor is pretrained via behavioral cloning, while the critic is pretrained using returns obtained from rollouts of the pretrained policy. The approach is evaluated on 15 simulated robotic manipulation and locomotion tasks. Experimental results show that actor-critic pretraining improves sample efficiency by 86.1% on average compared to no pretraining and by 30.9% compared to actor...
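The critic-pretraining step described in the abstract amounts to regressing the value network onto Monte Carlo returns collected from rollouts of the behavior-cloned actor. As a minimal sketch of that target computation (this is an illustrative reconstruction, not the authors' code; the episode rewards and discount factor below are hypothetical):

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Monte Carlo returns G_t = sum_k gamma^k * r_{t+k} for one episode.

    These returns would serve as regression targets when pretraining
    the critic on rollouts of the behavior-cloned policy, before
    standard PPO fine-tuning begins.
    """
    returns = np.zeros(len(rewards))
    g = 0.0
    # Accumulate backwards so each step reuses the return of its successor.
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

# Toy episode with an easily checked discount of 0.5:
targets = discounted_returns([1.0, 0.0, 2.0], gamma=0.5)
print(targets)  # → [1.5 1.  2. ]
```

In a full pipeline, the critic would then be fit to these targets with a mean-squared-error loss over many such rollouts, giving PPO a value estimate that is already consistent with the pretrained actor at the start of fine-tuning.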