[2511.01107] SLAP: Shortcut Learning for Abstract Planning
Computer Science > Robotics
arXiv:2511.01107 (cs)
[Submitted on 2 Nov 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: SLAP: Shortcut Learning for Abstract Planning
Authors: Y. Isabel Liu, Bowen Li, Benjamin Eysenbach, Tom Silver

Abstract: Long-horizon decision-making with sparse rewards and continuous states and actions remains a fundamental challenge in AI and robotics. Task and motion planning (TAMP) is a model-based framework that addresses this challenge by planning hierarchically with abstract actions (options). These options are manually defined, limiting the agent to behaviors that we as human engineers know how to program (pick, place, move). In this work, we propose Shortcut Learning for Abstract Planning (SLAP), a method that leverages existing TAMP options to automatically discover new ones. Our key idea is to use model-free reinforcement learning (RL) to learn shortcuts in the abstract planning graph induced by the existing options in TAMP. Without any additional assumptions or inputs, shortcut learning leads to shorter solutions than pure planning, and higher task success rates than flat and hierarchical RL. Qualitatively, SLAP discovers dynamic physical improvisations (e.g., slap, wiggle, wipe) that differ significantly from the manually-defined ones. In experiments in four simulated robotic environments, ...
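The core idea of planning over an abstract graph and exploiting learned shortcut edges can be illustrated with a minimal sketch. This is not the paper's implementation: the abstract states, the options, and the "slap" shortcut below are hypothetical stand-ins, and the graph search is plain BFS over symbolic states, whereas SLAP learns its shortcuts with model-free RL.

```python
from collections import deque

def shortest_plan(graph, start, goal):
    """BFS over an abstract planning graph.

    graph maps an abstract state to a list of (option, next_state)
    edges; returns the shortest option sequence from start to goal,
    or None if the goal is unreachable.
    """
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, plan = queue.popleft()
        if state == goal:
            return plan
        for option, nxt in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [option]))
    return None

# Hypothetical abstract states and manually-defined options
# (pick, move, place) -- illustrative only, not from the paper.
graph = {
    "on_table": [("pick", "in_hand")],
    "in_hand": [("move", "over_bin")],
    "over_bin": [("place", "in_bin")],
}
print(shortest_plan(graph, "on_table", "in_bin"))  # ['pick', 'move', 'place']

# A learned shortcut (e.g. a dynamic "slap" skill) adds a direct
# edge to the graph, collapsing the three-step plan to one action.
graph["on_table"].append(("slap", "in_bin"))
print(shortest_plan(graph, "on_table", "in_bin"))  # ['slap']
```

The sketch shows why shortcut edges shorten plans: once an RL-learned skill reliably connects two abstract states, the planner can treat it as just another edge and graph search does the rest.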