[2603.02650] Improving Diffusion Planners by Self-Supervised Action Gating with Energies
Computer Science > Machine Learning
arXiv:2603.02650 (cs)
[Submitted on 3 Mar 2026]

Title: Improving Diffusion Planners by Self-Supervised Action Gating with Energies
Authors: Yuan Lu, Dongqi Han, Yansen Wang, Dongsheng Li

Abstract: Diffusion planners are a strong approach for offline reinforcement learning, but they can fail when value-guided selection favours trajectories that score well yet are locally inconsistent with the environment dynamics, resulting in brittle execution. We propose Self-supervised Action Gating with Energies (SAGE), an inference-time re-ranking method that penalises dynamically inconsistent plans using a latent consistency signal. SAGE trains a Joint-Embedding Predictive Architecture (JEPA) encoder on offline state sequences and an action-conditioned latent predictor for short-horizon transitions. At test time, SAGE assigns each sampled candidate an energy given by its latent prediction error and combines this feasibility score with value estimates to select actions. SAGE integrates into any existing diffusion-planning pipeline that samples candidate trajectories and selects actions via value scoring; it requires no environment rollouts and no policy re-training. Across locomotion, navigation, and manipulation benchmarks, SAGE improves the performance and robustness of diffusion planners.
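The abstract's re-ranking step can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the function names (`latent_energy`, `sage_rerank`), the squared-error energy, the linear value-energy combination, and the `encode`/`predict` stand-ins for the JEPA encoder and action-conditioned predictor are all assumptions for the sake of the example.

```python
import numpy as np

def latent_energy(encode, predict, states, actions):
    """Energy of one candidate plan: accumulated latent prediction error
    of the action-conditioned predictor over short-horizon transitions.
    `encode` and `predict` are hypothetical stand-ins for the trained
    JEPA encoder and latent predictor."""
    err = 0.0
    for t in range(len(actions)):
        z_t = encode(states[t])             # latent of current state
        z_pred = predict(z_t, actions[t])   # predicted next latent
        z_next = encode(states[t + 1])      # actual next latent
        err += float(np.sum((z_pred - z_next) ** 2))
    return err

def sage_rerank(values, energies, lam=1.0):
    """Combine value estimates with consistency energies (lower energy =
    more dynamically consistent) and return the index of the selected
    candidate. The additive combination with weight `lam` is an
    illustrative choice, not the paper's exact scoring rule."""
    scores = np.asarray(values) - lam * np.asarray(energies)
    return int(np.argmax(scores))

# Toy usage: an "encoder" that is the identity and a "predictor" that
# adds the action to the latent, so a perfectly consistent transition
# has zero energy.
encode = lambda s: s
predict = lambda z, a: z + a
states = [np.zeros(2), np.full(2, 0.5)]   # one transition
actions = [np.full(2, 0.5)]
e = latent_energy(encode, predict, states, actions)   # 0.0: consistent plan

# A high-value but dynamically inconsistent candidate loses to a
# slightly lower-value but consistent one.
chosen = sage_rerank(values=[2.0, 1.5], energies=[5.0, 0.0])   # selects index 1
```

Because the method only needs candidate trajectories, their value scores, and the learned energy, it slots in after sampling and before action selection, which is why no environment rollouts or policy re-training are required.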