[2603.26089] Selective Deficits in LLM Mental Self-Modeling in a Behavior-Based Test of Theory of Mind
Computer Science > Machine Learning

arXiv:2603.26089 (cs)

[Submitted on 27 Mar 2026]

Title: Selective Deficits in LLM Mental Self-Modeling in a Behavior-Based Test of Theory of Mind

Authors: Christopher Ackerman

Abstract: The ability to represent oneself and others as agents with knowledge, intentions, and belief states that guide their behavior - Theory of Mind - is a human universal that enables us to navigate - and manipulate - the social world. It is supported by our ability to form mental models of ourselves and others. Its ubiquity in human affairs entails that LLMs have seen innumerable examples of it in their training data and therefore may have learned to mimic it, but whether they have actually learned causal models that they can deploy in arbitrary settings is unclear. We therefore develop a novel experimental paradigm that requires subjects to form representations of the mental states of themselves and others and act on them strategically, rather than merely describe them. We test a wide range of leading open and closed source LLMs released since 2024, as well as human subjects, on this paradigm. We find that 1) LLMs released before mid-2025 fail at all of our tasks, 2) more recent LLMs achieve human-level performance on modeling the cognitive states of others, and 3) even frontier LLMs fail at our self-modelin...