[2603.20215] Multi-Agent Debate with Memory Masking
Computer Science > Computation and Language — arXiv:2603.20215 (cs) [Submitted on 3 Mar 2026]

Title: Multi-Agent Debate with Memory Masking
Authors: Hongduan Tian, Xiao Feng, Ziyuan Zhao, Xiangyu Zhu, Rolan Yan, Bo Han

Abstract: Large language models (LLMs) have recently demonstrated impressive capabilities on reasoning tasks. Mainstream LLM reasoning frameworks currently focus on scaling up inference-time sampling to enhance performance. Among these frameworks, *multi-agent debate* (MAD), which employs multiple LLM agents that reason through multi-round debate, has emerged as a powerful paradigm: by accessing memories from previous rounds, agents can correct fallacious content and refine their reasoning iteratively. However, although MAD significantly improves the reasoning capabilities of LLMs, we observe in this paper that erroneous memories still persist and that LLM agents are vulnerable to them. To explore this phenomenon, we provide a theoretical insight that the performance of MAD depends heavily on the quality of the memories carried over from previous debate rounds, so the presence of erroneous memories poses a threat to MAD's performance. To address this problem, we introduce a simple yet effective multi-agent debat...
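The abstract does not specify how memories are masked, but the mechanism it describes — agents sharing memories across debate rounds, with low-quality memories hidden from subsequent rounds — can be sketched as a toy loop. The sketch below is a minimal illustration under assumed details: `ToyAgent`, `mask`, and the confidence-threshold rule are hypothetical stand-ins, not the paper's actual method or agents.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Memory:
    agent_id: int
    answer: str
    confidence: float  # toy self-reported certainty; a stand-in for memory quality

class ToyAgent:
    """Stub debater: starts from a prior answer, then follows the majority
    answer among the peer memories it is allowed to see."""
    def __init__(self, agent_id: int, prior: str, confidence: float):
        self.agent_id = agent_id
        self.answer = prior
        self.confidence = confidence

    def respond(self, visible: list) -> Memory:
        # Revise toward the majority of unmasked memories from the last round.
        if visible:
            self.answer = Counter(m.answer for m in visible).most_common(1)[0][0]
        return Memory(self.agent_id, self.answer, self.confidence)

def mask(memories: list, threshold: float) -> list:
    """Hypothetical masking rule: hide low-confidence (likely erroneous) memories."""
    return [m for m in memories if m.confidence >= threshold]

def debate(agents: list, rounds: int = 2, threshold: float = 0.5) -> str:
    memories: list = []
    for _ in range(rounds):
        visible = mask(memories, threshold)          # agents only see unmasked memories
        memories = [a.respond(visible) for a in agents]
    # Final answer: majority vote over the last round's memories.
    return Counter(m.answer for m in memories).most_common(1)[0][0]

agents = [ToyAgent(0, "4", 0.9), ToyAgent(1, "4", 0.8), ToyAgent(2, "5", 0.2)]
print(debate(agents))  # → 4  (the low-confidence "5" memory is masked out)
```

Without the `mask` step, the erroneous "5" memory would be visible to every agent in round two, which is exactly the vulnerability the abstract highlights; masking removes it from the shared context before agents revise.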