[2603.04289] IPD: Boosting Sequential Policy with Imaginary Planning Distillation in Offline Reinforcement Learning
Computer Science > Machine Learning
arXiv:2603.04289 (cs) [Submitted on 4 Mar 2026]

Title: IPD: Boosting Sequential Policy with Imaginary Planning Distillation in Offline Reinforcement Learning
Authors: Yihao Qin, Yuanfei Wang, Hang Zhou, Peiran Liu, Hao Dong, Yiding Ji

Abstract: Decision-transformer-based sequential policies have emerged as a powerful paradigm in offline reinforcement learning (RL), yet their efficacy remains constrained by the quality of static datasets and inherent architectural limitations. In particular, these models often fail to integrate suboptimal experiences effectively and do not explicitly plan toward an optimal policy. To bridge this gap, we propose Imaginary Planning Distillation (IPD), a novel framework that seamlessly incorporates offline planning into data generation, supervised training, and online inference. Our framework first learns a world model equipped with uncertainty measures and a quasi-optimal value function from the offline data. These components are used to identify suboptimal trajectories and augment them with reliable, imagined optimal rollouts generated via Model Predictive Control (MPC). A Transformer-based sequential policy is then trained on this enriched dataset, complemented by a value-guided objective that promotes...
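The abstract's data-augmentation step (uncertainty-aware MPC rollouts from a learned world model, bootstrapped with a value function) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy `world_model`, `value_fn`, random-shooting candidate sampler, and the `max_uncertainty` threshold are all assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned components described in the abstract.
# These are assumptions for illustration, not the paper's actual models.
def world_model(state, action):
    """One-step dynamics: returns (next_state, reward, uncertainty)."""
    next_state = state + 0.1 * action
    reward = -np.sum(next_state ** 2)            # drive the state toward the origin
    uncertainty = 0.01 * np.abs(action).sum()    # e.g., ensemble disagreement
    return next_state, reward, uncertainty

def value_fn(state):
    """Quasi-optimal value estimate used to bootstrap beyond the horizon."""
    return -np.sum(state ** 2)

def mpc_rollout(state, horizon=5, n_candidates=64, max_uncertainty=0.5):
    """Random-shooting MPC: sample action sequences, score each with the
    world model plus a terminal value estimate, and keep the best rollout
    whose accumulated model uncertainty stays below a threshold."""
    best_return, best_traj = -np.inf, None
    for _ in range(n_candidates):
        s, total, unc, traj = state, 0.0, 0.0, []
        for _ in range(horizon):
            a = rng.uniform(-1.0, 1.0, size=s.shape)
            s, r, u = world_model(s, a)
            total += r
            unc += u
            traj.append((a, s, r))
        total += value_fn(s)  # bootstrap with the value function at the horizon
        if unc <= max_uncertainty and total > best_return:
            best_return, best_traj = total, traj
    return best_traj  # imagined "optimal" rollout; None if all were too uncertain

# Augment a (hypothetical) suboptimal starting state with an imagined rollout.
traj = mpc_rollout(np.array([1.0, -1.0]))
```

In the framework described above, such imagined rollouts would replace or extend suboptimal dataset trajectories before the Transformer policy is trained on the enriched data; the uncertainty filter is what keeps the imagined data "reliable" in the abstract's sense.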