[2602.00428] When Agents "Misremember" Collectively: Exploring the Mandela Effect in LLM-based Multi-Agent Systems
Computer Science > Computation and Language
arXiv:2602.00428 (cs)
[Submitted on 31 Jan 2026 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: When Agents "Misremember" Collectively: Exploring the Mandela Effect in LLM-based Multi-Agent Systems
Authors: Naen Xu, Hengyu An, Shuo Shi, Jinghuai Zhang, Chunyi Zhou, Changjiang Li, Tianyu Du, Zhihui Fu, Jun Wang, Shouling Ji

Abstract: Recent advancements in large language models (LLMs) have significantly enhanced the capabilities of collaborative multi-agent systems, enabling them to address complex challenges. However, within these multi-agent systems, the susceptibility of agents to collective cognitive biases remains an underexplored issue. A compelling example is the Mandela effect, a phenomenon in which groups collectively misremember past events as a result of false details reinforced through social influence and internalized misinformation. This vulnerability limits our understanding of memory bias in multi-agent systems and raises ethical concerns about the potential spread of misinformation. In this paper, we conduct a comprehensive study of the Mandela effect in LLM-based multi-agent systems, focusing on its existence, contributing factors, and mitigation strategies. We propose MANBENCH, a novel benchmark designed to evaluate agent b...