[2603.00142] Evaluating Theory of Mind and Internal Beliefs in LLM-Based Multi-Agent Systems
Computer Science > Multiagent Systems
arXiv:2603.00142 (cs)
[Submitted on 24 Feb 2026]

Title: Evaluating Theory of Mind and Internal Beliefs in LLM-Based Multi-Agent Systems
Authors: Adam Kostka, Jarosław A. Chudziak

Abstract: LLM-based multi-agent systems (MAS) are gaining popularity due to their potential for collaborative problem-solving, enabled by advances in natural language comprehension, reasoning, and planning. Research on Theory of Mind (ToM) and Belief-Desire-Intention (BDI) models has the potential to further improve agents' interactions and decision-making in such systems. However, collaborative intelligence in dynamic environments remains difficult to achieve, since LLM performance in multi-agent settings is highly variable. Simply adding cognitive mechanisms like ToM and internal beliefs does not automatically result in improved coordination. The interplay between these mechanisms, particularly in relation to formal logic verification, remains largely underexplored across different LLMs. This work investigates: How do internal belief mechanisms, including symbolic solvers and Theory of Mind, influence collaborative decision-making in LLM-based multi-agent systems, and how does the interplay of those components influence system accuracy? We introduce a novel multi-agent architecture integrating ToM, BDI-style inter...
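To make the components named in the abstract concrete, the sketch below shows one minimal way a BDI-style agent with a first-order ToM store and a toy symbolic consistency check could be organized. This is an illustrative assumption, not the authors' architecture: the class `BDIAgent` and its methods (`observe`, `hear`, `beliefs_consistent`, `deliberate`) are hypothetical names, and beliefs are modeled as plain symbolic strings rather than LLM-generated state.

```python
from dataclasses import dataclass, field


@dataclass
class BDIAgent:
    """Illustrative BDI-style agent: beliefs are symbolic facts, desires are
    goal predicates, and intentions are the goals currently being pursued.
    (Hypothetical sketch; not taken from the paper.)"""
    name: str
    beliefs: set[str] = field(default_factory=set)
    desires: set[str] = field(default_factory=set)
    intentions: list[str] = field(default_factory=list)
    # First-order Theory of Mind: this agent's estimate of other agents' beliefs.
    tom_beliefs: dict[str, set[str]] = field(default_factory=dict)

    def observe(self, fact: str) -> None:
        """Update own beliefs from an observation, retracting a direct negation."""
        negation = fact[4:] if fact.startswith("not ") else f"not {fact}"
        self.beliefs.discard(negation)
        self.beliefs.add(fact)

    def hear(self, speaker: str, claim: str) -> None:
        """Attribute a stated claim to the speaker's belief set (ToM update)."""
        self.tom_beliefs.setdefault(speaker, set()).add(claim)

    def beliefs_consistent(self) -> bool:
        """Toy symbolic check: no fact is believed together with its negation."""
        return not any(f"not {f}" in self.beliefs for f in self.beliefs)

    def deliberate(self) -> list[str]:
        """Adopt as intentions those desires not yet believed to hold."""
        self.intentions = [g for g in self.desires if g not in self.beliefs]
        return self.intentions


if __name__ == "__main__":
    alice = BDIAgent("alice", desires={"door open"})
    alice.observe("not door open")     # own perception
    alice.hear("bob", "door open")     # Bob claims otherwise
    print(alice.deliberate())          # ['door open'] -> goal still unmet
    print(alice.beliefs_consistent())  # True: own beliefs are not contradictory
    print(alice.tom_beliefs)           # alice's model of Bob disagrees with her own belief
```

In a full system along the lines the abstract describes, the string-matching consistency check would be replaced by a proper symbolic solver, and the ToM store would be populated from LLM-mediated dialogue between agents.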