[2604.04157] Readable Minds: Emergent Theory-of-Mind-Like Behavior in LLM Poker Agents
Computer Science > Artificial Intelligence
arXiv:2604.04157 (cs)
[Submitted on 5 Apr 2026]
Title: Readable Minds: Emergent Theory-of-Mind-Like Behavior in LLM Poker Agents
Authors: Hsieh-Ting Lin, Tsung-Yu Hou
Abstract: Theory of Mind (ToM) -- the ability to model others' mental states -- is fundamental to human social cognition. Whether large language models (LLMs) can develop ToM has been tested exclusively through static vignettes, leaving open whether ToM-like reasoning can emerge through dynamic interaction. Here we report that autonomous LLM agents playing extended sessions of Texas Hold'em poker progressively develop sophisticated opponent models, but only when equipped with persistent memory. In a 2x2 factorial design crossing memory (present/absent) with domain knowledge (present/absent), each with five replications (N = 20 experiments, ~6,000 agent-hand observations), we find that memory is both necessary and sufficient for the emergence of ToM-like behavior (Cliff's delta = 1.0, p = 0.008). Agents with memory reach ToM Levels 3-5 (predictive to recursive modeling), while agents without memory remain at Level 0 across all replications. Strategic deception grounded in opponent models occurs exclusively in the memory-equipped conditions (Fisher's exact p < 0.001). Domain expertise does not gate ToM-like behavior...
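A minimal sketch of how the abstract's headline statistics could arise. With five replications per memory condition and complete separation between groups, Cliff's delta is exactly 1.0, and an exact two-sided permutation test over all C(10,5) = 252 relabelings gives p = 2/252 ≈ 0.008, matching the reported value. The ToM-level scores below are hypothetical, chosen only to be consistent with the abstract's description (memory agents at Levels 3-5, memory-free agents at Level 0); the paper's actual scores and choice of test statistic may differ.

```python
from itertools import combinations
from statistics import mean

# Hypothetical replication-level ToM scores (one per replication),
# consistent with the abstract: memory agents reach Levels 3-5,
# memory-free agents remain at Level 0.
memory = [3, 4, 4, 5, 3]
no_memory = [0, 0, 0, 0, 0]

# Cliff's delta: P(x > y) - P(x < y) over all cross-group pairs.
pairs = [(x, y) for x in memory for y in no_memory]
delta = (sum(x > y for x, y in pairs) - sum(x < y for x, y in pairs)) / len(pairs)

# Exact two-sided permutation test on the difference of group means:
# enumerate every way of relabeling the pooled scores into two groups of 5.
pooled = memory + no_memory
n = len(memory)
observed = mean(memory) - mean(no_memory)
extreme = total = 0
for idx in combinations(range(len(pooled)), n):
    grp = [pooled[i] for i in idx]
    rest = [pooled[i] for i in range(len(pooled)) if i not in idx]
    if abs(mean(grp) - mean(rest)) >= abs(observed) - 1e-12:
        extreme += 1
    total += 1
p = extreme / total

print(delta, round(p, 3))  # → 1.0 0.008
```

Only the two relabelings that keep the groups perfectly separated (all nonzero scores together, or all zeros together) are as extreme as the observed split, hence p = 2/252; any overlap between conditions would push both delta below 1.0 and p above this floor.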