[2507.16713] A Pragmatist Robot: Learning to Plan Tasks by Experiencing the Real World
Summary
The paper presents PragmaBot, a framework for robotic task planning that uses real-world experience and self-reflection to improve task success rates.
Why It Matters
This research addresses a key limitation of large language models in robotics: trained on general internet data, they are not inherently aligned with a robot's embodiment, skill set, and physical constraints. By letting robots learn from their own experience, the approach could improve the efficiency and adaptability of robots across real-world tasks.
Key Takeaways
- PragmaBot enhances robotic task planning through real-world experience.
- Self-reflection and short-term memory significantly boost task success rates.
- The framework demonstrates improved adaptability in novel tasks using past experiences.
Computer Science > Robotics
arXiv:2507.16713 (cs)
[Submitted on 22 Jul 2025 (v1), last revised 14 Feb 2026 (this version, v2)]
Title: A Pragmatist Robot: Learning to Plan Tasks by Experiencing the Real World
Authors: Kaixian Qu, Guowei Lan, René Zurbrügg, Changan Chen, Christopher E. Mower, Haitham Bou-Ammar, Marco Hutter
Abstract: Large language models (LLMs) have emerged as the dominant paradigm for robotic task planning using natural language instructions. However, trained on general internet data, LLMs are not inherently aligned with the embodiment, skill sets, and limitations of real-world robotic systems. Inspired by the emerging paradigm of verbal reinforcement learning, where LLM agents improve through self-reflection and few-shot learning without parameter updates, we introduce PragmaBot, a framework that enables robots to learn task planning through real-world experience. PragmaBot employs a vision-language model (VLM) as the robot's "brain" and "eye", allowing it to visually evaluate action outcomes and self-reflect on failures. These reflections are stored in a short-term memory (STM), enabling the robot to quickly adapt its behavior during ongoing tasks. Upon task completion, the robot summarizes the lessons learned into its long-term memory (LTM). When facing new tasks, it can leverage retrieval-au...
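The reflect-store-distill loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the class and function names (`MemoryStore`, `run_task`) and the string-match retrieval are assumptions standing in for the VLM-based evaluation and retrieval the paper uses.

```python
# Hypothetical sketch of a PragmaBot-style memory loop.
# All names here are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """A trivial stand-in for the paper's STM/LTM stores."""
    entries: list = field(default_factory=list)

    def add(self, note: str) -> None:
        self.entries.append(note)

    def retrieve(self, query: str) -> list:
        # Placeholder for retrieval-augmented lookup (e.g. embedding search).
        return [e for e in self.entries if query in e]


def run_task(steps, evaluate, stm: MemoryStore, ltm: MemoryStore) -> None:
    """Execute steps; self-reflect on failures into STM, then distill to LTM."""
    for step in steps:
        if not evaluate(step):  # stand-in for the VLM judging the outcome
            # Self-reflection on the failure, kept in short-term memory
            # so it can steer behavior within the ongoing task.
            stm.add(f"step '{step}' failed; try alternative")
    # On task completion, summarize lessons into long-term memory.
    for lesson in stm.entries:
        ltm.add(lesson)
    stm.entries.clear()
```

A later task would then call `ltm.retrieve(...)` to condition its plan on past lessons, mirroring the retrieval-augmented step the abstract mentions.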