[2602.21198] Learning from Trials and Errors: Reflective Test-Time Planning for Embodied LLMs
Summary
This paper introduces Reflective Test-Time Planning, a framework that lets embodied LLM agents learn from their mistakes through reflection, improving task performance in complex, long-horizon environments.
Why It Matters
As robotics and AI continue to evolve, integrating reflective learning mechanisms is crucial for improving autonomous decision-making. This research addresses a key limitation of current embodied LLMs, which cannot accumulate experience across trials, and provides a framework that could significantly enhance their effectiveness in real-world applications.
Key Takeaways
- Reflective Test-Time Planning combines reflection-in-action and reflection-on-action for better decision-making.
- The framework allows robots to learn from past mistakes, improving long-term performance.
- Experiments show significant gains over baselines on two newly designed benchmarks: Long-Horizon Household and MuJoCo Cupboard Fitting.
- Qualitative analyses demonstrate effective behavioral corrections through reflection.
- This approach has implications for enhancing AI safety and reliability in robotics.
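The reflection-in-action mode described above can be sketched as a simple candidate-scoring loop: before executing, the agent samples several candidate actions and ranks them with an internal reflection signal. This is a minimal, hypothetical sketch; the function names (`propose_actions`, `reflect_score`) and the toy scoring rule are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of reflection-in-action: sample candidate actions,
# score each with an internal reflection signal before execution, and
# execute only the highest-scoring one. All names here are illustrative.

def propose_actions(state, n=4):
    """Stand-in for an LLM sampling n candidate high-level actions."""
    return [f"action_{i}_for_{state}" for i in range(n)]

def reflect_score(state, action):
    """Stand-in internal reflection: a real system would prompt the LLM to
    critique the candidate's likely failure modes; here we use a
    deterministic toy score so the sketch is runnable."""
    return -(abs(hash((state, action))) % 100)

def reflection_in_action(state, n=4):
    """Pick the candidate action with the best internal reflection score."""
    candidates = propose_actions(state, n)
    best_score, best_action = max((reflect_score(state, a), a) for a in candidates)
    return best_action

print(reflection_in_action("kitchen_cleanup"))
```

In a full system, `reflect_score` would itself be a learned reflection model updated after execution, which is where the second mode comes in.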
Computer Science > Machine Learning
arXiv:2602.21198 (cs) [Submitted on 24 Feb 2026]
Title: Learning from Trials and Errors: Reflective Test-Time Planning for Embodied LLMs
Authors: Yining Hong, Huang Huang, Manling Li, Li Fei-Fei, Jiajun Wu, Yejin Choi
Abstract: Embodied LLMs endow robots with high-level task reasoning, but they cannot reflect on what went wrong or why, turning deployment into a sequence of independent trials where mistakes repeat rather than accumulate into experience. Drawing upon human reflective practitioners, we introduce Reflective Test-Time Planning, which integrates two modes of reflection: reflection-in-action, where the agent uses test-time scaling to generate and score multiple candidate actions using internal reflections before execution; and reflection-on-action, which uses test-time training to update both its internal reflection model and its action policy based on external reflections after execution. We also include retrospective reflection, allowing the agent to re-evaluate earlier decisions and perform model updates with hindsight for proper long-horizon credit assignment. Experiments on our newly-designed Long-Horizon Household benchmark and MuJoCo Cupboard Fitting benchmark show significant gains over baseline models, with ablative studies valida...
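The complementary reflection-on-action mode, as the abstract describes it, performs a small test-time update after execution so that external feedback changes future behavior rather than being discarded. The sketch below is a hypothetical toy version under strong simplifying assumptions: it replaces gradient updates to an LLM's reflection model and policy with an incremental update to a per-(state, action) preference table.

```python
# Hypothetical sketch of reflection-on-action: after executing an action,
# convert environment feedback into an external reflection signal and
# perform a test-time update of the agent's preferences, so the same
# mistake is down-weighted on future trials. All names are illustrative.
from collections import defaultdict

class ReflectiveAgent:
    def __init__(self, lr=0.5):
        self.values = defaultdict(float)  # per-(state, action) preference
        self.lr = lr                      # test-time learning rate

    def score(self, state, action):
        """Current preference for taking `action` in `state`."""
        return self.values[(state, action)]

    def reflect_on_action(self, state, action, feedback):
        """Test-time update: move the stored preference toward the external
        feedback signal (+1.0 for success, -1.0 for failure)."""
        key = (state, action)
        self.values[key] += self.lr * (feedback - self.values[key])

agent = ReflectiveAgent()
agent.reflect_on_action("cupboard", "push_door", feedback=-1.0)  # failed trial
agent.reflect_on_action("cupboard", "push_door", feedback=-1.0)  # repeated failure
print(agent.score("cupboard", "push_door"))  # preference is now negative
```

Repeated failures drive the preference for the failing action steadily downward, which is the toy analogue of mistakes accumulating into experience instead of repeating across independent trials.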