[2602.21670] Hierarchical LLM-Based Multi-Agent Framework with Prompt Optimization for Multi-Robot Task Planning

arXiv - AI · 4 min read

Summary

This paper presents a hierarchical framework for multi-robot task planning that couples large language models (LLMs) with prompt optimization, improving planning accuracy on ambiguous and long-horizon missions.

Why It Matters

As robotics and AI continue to advance, effective multi-robot coordination is crucial for applications ranging from industrial automation to autonomous vehicles. Traditional PDDL planners offer rigorous guarantees but struggle with ambiguous or long-horizon missions; by layering LLMs on top of a classical planner, this framework keeps those guarantees while letting robots interpret free-form natural-language instructions.

Key Takeaways

  • Introduces a hierarchical multi-agent framework: an upper LLM layer decomposes missions and assigns subtasks to lower-layer agents (see the sketch after this list).
  • Lower-layer agents generate PDDL problems that a classical planner solves, pairing LLM flexibility with planner guarantees.
  • Applies TextGrad-inspired textual-gradient updates to each agent's prompt when plans fail.
  • Shares learned meta-prompts across agents in the same layer for efficient multi-agent prompt optimization.
  • Achieves a 0.95 success rate on compound tasks in the MAT-THOR benchmark.
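
The sketch below illustrates, in Python, the two-layer loop these takeaways describe: an upper layer splits a mission into per-robot subtasks, lower-layer agents turn subtasks into PDDL problems for a classical solver, and agents whose plans fail get their prompts rewritten. All function names (`decompose_mission`, `generate_pddl_problem`, `classical_solve`, `refine_prompt`) and the control flow are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch of the two-layer planning loop, using hypothetical
# stand-in helpers; the paper's actual prompts, PDDL domain, and solver
# are not public here, so every LLM/planner call is stubbed.
from dataclasses import dataclass

@dataclass
class Agent:
    robot_id: str
    prompt: str  # per-agent instruction prompt, rewritten on failure

def decompose_mission(mission: str, robot_ids: list[str]) -> dict[str, str]:
    """Upper layer: one LLM call that splits the mission into a subtask
    per robot. Stubbed: assigns the whole mission to each robot."""
    return {rid: mission for rid in robot_ids}

def generate_pddl_problem(agent: Agent, subtask: str) -> str:
    """Lower layer: an LLM call that emits a PDDL problem file. Stubbed:
    embeds the agent's current prompt as a comment in a placeholder."""
    return f";; {agent.prompt}\n(define (problem {agent.robot_id}-task))"

def classical_solve(pddl_problem: str) -> list[str] | None:
    """A classical PDDL planner (e.g. Fast Downward) behind a wrapper;
    returns an action sequence, or None if no plan is found. Stubbed:
    always fails, so the sketch exercises the refinement path."""
    return None

def refine_prompt(prompt: str, subtask: str) -> str:
    """TextGrad-inspired update: obtain textual feedback on the failure
    and fold it into the prompt (expanded in the next sketch). Stubbed."""
    return prompt + "\n# revised after planner failure on: " + subtask

def plan_mission(mission: str, agents: list[Agent], max_rounds: int = 3):
    subtasks = decompose_mission(mission, [a.robot_id for a in agents])
    plans: dict[str, list[str]] = {}
    for _ in range(max_rounds):
        pending = [a for a in agents if a.robot_id not in plans]
        if not pending:
            break
        for agent in pending:
            problem = generate_pddl_problem(agent, subtasks[agent.robot_id])
            plan = classical_solve(problem)
            if plan is not None:
                plans[agent.robot_id] = plan
            else:
                # Plan failed: rewrite this agent's prompt and retry
                # on the next round.
                agent.prompt = refine_prompt(agent.prompt,
                                             subtasks[agent.robot_id])
    return plans  # possibly partial if max_rounds is exhausted
```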

Computer Science > Robotics
arXiv:2602.21670 (cs) [Submitted on 25 Feb 2026]

Title: Hierarchical LLM-Based Multi-Agent Framework with Prompt Optimization for Multi-Robot Task Planning
Authors: Tomoya Kawabe, Rin Takano (NEC Corporation)

Abstract: Multi-robot task planning requires decomposing natural-language instructions into executable actions for heterogeneous robot teams. Conventional Planning Domain Definition Language (PDDL) planners provide rigorous guarantees but struggle to handle ambiguous or long-horizon missions, while large language models (LLMs) can interpret instructions and propose plans but may hallucinate or produce infeasible actions. We present a hierarchical multi-agent LLM-based planner with prompt optimization: an upper layer decomposes tasks and assigns them to lower-layer agents, which generate PDDL problems solved by a classical planner. When plans fail, the system applies TextGrad-inspired textual-gradient updates to optimize each agent's prompt and thereby improve planning accuracy. In addition, meta-prompts are learned and shared across agents within the same layer, enabling efficient prompt optimization in multi-agent settings. On the MAT-THOR benchmark, our planner achieves success rates of 0.95 on compound tasks, ...
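
To make the textual-gradient and meta-prompt ideas concrete, here is a hedged sketch of how such an update loop could look. The feedback wording, the `llm()` helper, and the distillation step are assumptions for illustration; the abstract does not specify the exact prompts or update rule.

```python
def llm(prompt: str) -> str:
    """Stand-in for any chat-completion call (hypothetical helper)."""
    raise NotImplementedError

def textual_gradient(agent_prompt: str, subtask: str, error: str) -> str:
    """Ask an LLM to critique the prompt given the planner failure; the
    critique plays the role of a gradient in text space (as in TextGrad)."""
    return llm(
        "The prompt below produced an unsolvable PDDL problem.\n"
        f"Prompt:\n{agent_prompt}\n\nSubtask: {subtask}\nError: {error}\n"
        "In a few sentences, explain how the prompt should change."
    )

def apply_gradient(agent_prompt: str, critique: str) -> str:
    """The 'update step': rewrite the prompt according to the critique."""
    return llm(
        "Rewrite this prompt so that it addresses the critique.\n"
        f"Prompt:\n{agent_prompt}\n\nCritique:\n{critique}"
    )

def distill_meta_prompt(critiques: list[str]) -> str:
    """Meta-prompt sharing: distill critiques gathered across one layer's
    agents into a short instruction prepended to every agent in that
    layer, so one agent's failure also teaches its peers."""
    joined = "\n---\n".join(critiques)
    return llm(
        "Summarize the recurring advice in these critiques as one short,"
        f" task-agnostic instruction:\n{joined}"
    )
```

Compared with updating each agent independently, distilling a shared meta-prompt amortizes feedback across the whole layer, which is the efficiency gain the abstract attributes to the multi-agent setting.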

Related Articles

LLMs

An attack class that passes every current LLM filter - no payload, no injection signature, no log trace

https://shapingrooms.com/research I published a paper today on something I've been calling postural manipulation. The short version: ordi...

Reddit - Artificial Intelligence · 1 min ·
LLMs

[R] An attack class that passes every current LLM filter - no payload, no injection signature, no log trace

https://shapingrooms.com/research I've been documenting what I'm calling postural manipulation: a specific class of language that install...

Reddit - Machine Learning · 1 min ·
LLMs

There are more AI health tools than ever—but how well do they work? | MIT Technology Review

Earlier this month, Microsoft launched Copilot Health, a new space within its Copilot app where users will be able to connect their medic...

MIT Technology Review · 11 min ·
LLMs

What does Gemini think of you?

I noticed that Gemini was referring back to a lot of queries I've made in the past and was using that knowledge to drive follow up prompt...

Reddit - Artificial Intelligence · 1 min ·