[2602.10625] To Think or Not To Think, That is The Question for Large Reasoning Models in Theory of Mind Tasks
Computer Science > Artificial Intelligence
arXiv:2602.10625 (cs)
[Submitted on 11 Feb 2026 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: To Think or Not To Think, That is The Question for Large Reasoning Models in Theory of Mind Tasks

Authors: Nanxu Gong, Haotian Li, Sixun Dong, Jianxun Lian, Yanjie Fu, Xing Xie

Abstract: Theory of Mind (ToM) assesses whether models can infer hidden mental states such as beliefs, desires, and intentions, an ability essential for natural social interaction. Although recent progress in Large Reasoning Models (LRMs) has improved step-by-step inference in mathematics and coding, whether this benefit transfers to socio-cognitive skills remains underexplored. We present a systematic study of nine advanced Large Language Models (LLMs), comparing reasoning models with non-reasoning models on three representative ToM benchmarks. The results show that reasoning models do not consistently outperform non-reasoning models and sometimes perform worse. A fine-grained analysis reveals three insights. First, slow thinking collapses: accuracy drops significantly as responses grow longer, and larger reasoning budgets hurt performance. Second, moderate and adaptive reasoning benefits performance: constraining reasoning length mitigates failure, while distinct ...