[2603.03704] Large-Language-Model-Guided State Estimation for Partially Observable Task and Motion Planning
Computer Science > Robotics
arXiv:2603.03704 (cs) [Submitted on 4 Mar 2026]

Title: Large-Language-Model-Guided State Estimation for Partially Observable Task and Motion Planning
Authors: Yoonwoo Kim, Raghav Arora, Roberto Martín-Martín, Peter Stone, Ben Abbatematteo, Yoonchang Sung

Abstract: Robot planning in partially observable environments, where not all objects are known or visible, is a challenging problem, as it requires reasoning under uncertainty through partially observable Markov decision processes. During the execution of a computed plan, a robot may unexpectedly observe task-irrelevant objects, which naive planners typically ignore. In this work, we propose incorporating two types of common-sense knowledge: (1) certain objects are more likely to be found in specific locations; and (2) similar objects are likely to be co-located, while dissimilar objects are less likely to be found together. Manually engineering such knowledge is complex, so we explore leveraging the powerful common-sense reasoning capabilities of large language models (LLMs). Our planning and execution framework, CoCo-TAMP, introduces a hierarchical state estimation that uses LLM-guided information to shape the belief over task-relevant objects, enabling efficient solutions to long-horizon task and motion planning.
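To make the belief-shaping idea concrete, the sketch below illustrates (in a minimal, hypothetical form, not the paper's actual CoCo-TAMP implementation) how the two kinds of common-sense knowledge could shape a belief over a target object's location: a location prior ("milk is probably in the fridge") and co-location evidence from objects the robot has already observed ("butter was seen on the counter, and butter is similar to milk"). All names, scores, and functions here are illustrative stand-ins for LLM-elicited values.

```python
# Hypothetical sketch of LLM-guided belief shaping (not the paper's code).
# The belief is a discrete distribution over candidate locations.

def normalize(belief):
    """Renormalize a dict of nonnegative scores into a distribution."""
    total = sum(belief.values())
    return {loc: p / total for loc, p in belief.items()}

def shape_belief(location_prior, colocation_score, observed):
    """Combine a location prior with co-location evidence.

    location_prior:   {location: prior probability of target being there}
    colocation_score: {observed object: similarity-to-target in [0, 1]}
    observed:         {location: [objects already seen there]}
    """
    belief = dict(location_prior)
    for loc, objs in observed.items():
        for obj in objs:
            # Boost locations containing objects similar to the target;
            # dissimilar objects (score 0) leave the prior unchanged.
            belief[loc] *= 1.0 + colocation_score.get(obj, 0.0)
    return normalize(belief)

def negative_update(belief, searched_loc):
    """Bayesian update after searching a location without finding the target."""
    belief = dict(belief)
    belief[searched_loc] = 0.0
    return normalize(belief)

if __name__ == "__main__":
    # Illustrative numbers only, e.g. target = "milk".
    prior = {"fridge": 0.5, "pantry": 0.3, "counter": 0.2}
    sim = {"butter": 0.8, "hammer": 0.0}
    seen = {"counter": ["butter"], "pantry": ["hammer"]}

    b = shape_belief(prior, sim, seen)       # counter is boosted past pantry
    b = negative_update(b, "fridge")         # fridge searched, milk not found
    print(max(b, key=b.get))                 # most promising next location
```

A full POMDP solver would fold such shaped beliefs into its observation model; this sketch only shows the shaping and renormalization steps.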