[2410.13648] SimpleToM: Exposing the Gap between Explicit ToM Inference and Implicit ToM Application in LLMs
Computer Science > Computation and Language
arXiv:2410.13648 (cs)
[Submitted on 17 Oct 2024 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: SimpleToM: Exposing the Gap between Explicit ToM Inference and Implicit ToM Application in LLMs
Authors: Yuling Gu, Oyvind Tafjord, Hyunwoo Kim, Jared Moore, Ronan Le Bras, Peter Clark, Yejin Choi

Abstract: Large language models (LLMs) are increasingly tested for a "Theory of Mind" (ToM) - the ability to attribute mental states to oneself and others. Yet most evaluations stop at explicit belief attribution in classical toy stories or stylized tasks, leaving open the question of whether LLMs can implicitly apply such knowledge to predict human behavior, or to judge an observed behavior, in diverse scenarios. We introduce SimpleToM, a benchmark that advances ToM evaluation along two novel axes. First, it probes multiple levels of ToM reasoning, from mental state inference (explicit ToM) to behavior prediction and judgment (applied ToM). Second, it situates these tasks in diverse, everyday scenarios - such as supermarkets, hospitals, schools, and offices - where information asymmetries naturally arise (e.g., hidden defects in grocery store items, incomplete information in provider-patient interactions, or restricted access to locked devices). Si...