[2601.10120] TopoDIM: One-shot Topology Generation of Diverse Interaction Modes for Multi-Agent Systems


arXiv - AI 3 min read


Computer Science > Multiagent Systems
arXiv:2601.10120 (cs)
[Submitted on 15 Jan 2026 (v1), last revised 16 Apr 2026 (this version, v2)]

Title: TopoDIM: One-shot Topology Generation of Diverse Interaction Modes for Multi-Agent Systems
Authors: Rui Sun, Jie Ding, Chenghua Gong, Tianjun Gu, Yihang Jiang, Juyuan Zhang, Liming Pan, Linyuan Lü

Abstract: Optimizing the communication topology of LLM-based multi-agent systems is critical for enabling collective intelligence. Existing methods rely mainly on spatio-temporal interaction paradigms, where the sequential execution of multi-round dialogues incurs high latency and computation cost. Motivated by recent insights that evaluation and debate mechanisms can improve problem-solving in multi-agent systems, we propose TopoDIM, a framework for one-shot Topology generation with Diverse Interaction Modes. Designed for decentralized execution to enhance adaptability and privacy, TopoDIM enables agents to autonomously construct heterogeneous communication without iterative coordination, achieving token efficiency and improved task performance. Experiments demonstrate that TopoDIM reduces total token consumption by 46.41% while improving average performance by 1.50% over state-of-the-art methods. Moreover, the framework exhibits strong adaptability in organizing communication among het...
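To make the "one-shot, decentralized" idea concrete, here is a minimal illustrative sketch, not the paper's actual algorithm: each agent independently proposes its outgoing edges and an interaction mode per edge in a single pass, so a heterogeneous directed communication graph emerges without any iterative coordination. The mode names and the two-edges-per-agent rule are assumptions made purely for illustration.

```python
import random

# Illustrative interaction modes (hypothetical labels, not from the paper).
MODES = ["debate", "evaluate", "report"]

def propose_edges(agent_id, num_agents, rng):
    """One agent's one-shot decision: pick peers and a mode for each edge,
    without seeing any other agent's choices (decentralized)."""
    peers = [a for a in range(num_agents) if a != agent_id]
    chosen = rng.sample(peers, k=min(2, len(peers)))  # assumed fan-out of 2
    return [(agent_id, peer, rng.choice(MODES)) for peer in chosen]

def build_topology(num_agents, seed=0):
    """Union of all agents' independent proposals: a heterogeneous digraph
    built in a single round, with no multi-round dialogue."""
    rng = random.Random(seed)
    edges = []
    for agent_id in range(num_agents):
        edges.extend(propose_edges(agent_id, num_agents, rng))
    return edges

edges = build_topology(4)
print(len(edges))  # 4 agents, 2 outgoing edges each
```

The contrast with spatio-temporal paradigms is that here the whole topology is fixed after one pass over the agents; no agent waits on another's output before deciding whom to talk to.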

Originally published on April 17, 2026. Curated by AI News.

Related Articles

[2603.13683] Preconditioned Test-Time Adaptation for Out-of-Distribution Debiasing in Narrative Generation (arXiv - AI · 3 min)

[2602.03295] POP: Prefill-Only Pruning for Efficient Large Model Inference (arXiv - AI · 4 min)

[2601.15488] Multi-Persona Thinking for Bias Mitigation in Large Language Models (arXiv - AI · 3 min)

[2601.14724] HERMES: KV Cache as Hierarchical Memory for Efficient Streaming Video Understanding (arXiv - AI · 4 min)

