[2602.16085] Language Statistics and False Belief Reasoning: Evidence from 41 Open-Weight LMs

arXiv - AI · 4 min read

Summary

This article investigates mental state reasoning in language models (LMs), replicating and extending prior work on the false belief task across 41 open-weight models and examining what their behavior reveals about LM capacities and theories of human cognition.

Why It Matters

How language models handle mental state reasoning bears on theories of human cognition, such as the proposal that mental state reasoning emerges in part from language exposure, and on the evaluation of LMs themselves. Much published work relies on a small set of closed-source models; by testing a large sample of open-weight LMs, this research both exposes the limits of current models and shows how LM behavior can be used to generate testable hypotheses about cognitive processes.

Key Takeaways

  • The study assesses 41 open-weight LMs, drawn from distinct model families, for their ability to reason about false beliefs.
  • 34% of the LMs tested demonstrated sensitivity to implied knowledge states (a minimal probe of this measure is sketched after this list).
  • Larger LMs showed increased sensitivity and higher psychometric predictive power, i.e., better prediction of human responses.
  • The research suggests that the distributional statistics of language can explain some, but not all, aspects of human mental state reasoning.
  • The findings indicate the need for larger samples of open-weight LMs to rigorously evaluate cognitive theories.
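
Below is a minimal sketch of the kind of sensitivity probe used in false belief studies of LMs, not the authors' code or stimuli: a causal LM scores a critical completion under two contexts that differ only in whether a character saw an object being moved. The model name and the passage pair are illustrative assumptions.

```python
# Sketch of a false-belief sensitivity probe for a causal LM.
# Assumes a HuggingFace model; stimuli below are illustrative, not the paper's.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; the paper evaluates 41 open-weight models
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def completion_logprob(context: str, completion: str) -> float:
    """Sum of log-probabilities the model assigns to `completion` after `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Score only the completion tokens, each predicted from the previous position.
    for pos in range(ctx_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total

# Illustrative stimuli: only the character's implied knowledge state differs.
true_belief = ("Sean put the book on the shelf. While he was watching, "
               "Anna moved it to the drawer. Sean thinks the book is in the")
false_belief = ("Sean put the book on the shelf. While he was away, "
                "Anna moved it to the drawer. Sean thinks the book is in the")

for label, ctx in [("true belief", true_belief), ("false belief", false_belief)]:
    gap = completion_logprob(ctx, " shelf") - completion_logprob(ctx, " drawer")
    # A knowledge-sensitive model should flip this gap across conditions.
    print(f"{label}: logP(shelf) - logP(drawer) = {gap:+.2f}")
```

A model counts as sensitive, on this kind of measure, if the relative probability of the two locations tracks the character's knowledge state rather than just the object's actual location.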

Computer Science > Computation and Language

arXiv:2602.16085 (cs) [Submitted on 17 Feb 2026]

Title: Language Statistics and False Belief Reasoning: Evidence from 41 Open-Weight LMs

Authors: Sean Trott, Samuel Taylor, Cameron Jones, James A. Michaelov, Pamela D. Rivière

Abstract: Research on mental state reasoning in language models (LMs) has the potential to inform theories of human social cognition--such as the theory that mental state reasoning emerges in part from language exposure--and our understanding of LMs themselves. Yet much published work on LMs relies on a relatively small sample of closed-source LMs, limiting our ability to rigorously test psychological theories and evaluate LM capacities. Here, we replicate and extend published work on the false belief task by assessing LM mental state reasoning behavior across 41 open-weight models (from distinct model families). We find sensitivity to implied knowledge states in 34% of the LMs tested; however, consistent with prior work, none fully "explain away" the effect in humans. Larger LMs show increased sensitivity and also exhibit higher psychometric predictive power. Finally, we use LM behavior to generate and test a novel hypothesis about human cognition: both humans and LMs show a bias towards attributing false beliefs when knowledge states ...
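
The abstract's notion of "psychometric predictive power" can be made concrete with a rough sketch: correlate each item's LM-derived probability with the proportion of humans giving the same response. This is an assumed general form of the measure, not the paper's exact analysis, and the arrays below are hypothetical placeholders, not the paper's data.

```python
# Sketch of item-level psychometric predictive power.
# Arrays are hypothetical placeholders, not data from the paper.
import numpy as np
from scipy import stats

# Per-item LM probability of the false-belief-consistent completion,
# and the proportion of human participants producing that completion.
lm_prob = np.array([0.81, 0.42, 0.67, 0.55, 0.73, 0.38, 0.90, 0.61])
human_prop = np.array([0.76, 0.50, 0.71, 0.48, 0.69, 0.44, 0.88, 0.58])

r, p = stats.pearsonr(lm_prob, human_prop)
print(f"item-level correlation: r = {r:.2f}, p = {p:.3f}")
# On this framing, "larger LMs show higher psychometric predictive power"
# means r (or the fit of a regression using LM probability as a predictor)
# increases with model size.
```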

Related Articles

[R] GPT-5.4-mini regressed 22pp on vanilla prompting vs GPT-5-mini. Nobody noticed because benchmarks don't test this. Recursive Language Models solved it.

GPT-5.4-mini produces shorter, terser outputs by default. Vanilla accuracy dropped from 69.5% to 47.2% across 12 tasks (1,800 evals). The...

Reddit - Machine Learning · 1 min ·

built an open source CLI that auto generates AI setup files for your projects just hit 150 stars

hey everyone, been working on this side project called ai-setup and just hit a milestone i wanted to share 150 github stars, 90 PRs merge...

Reddit - Artificial Intelligence · 1 min ·

built an open source tool that auto generates AI context files for any codebase, 150 stars in

one of the most tedious parts of working with AI coding tools is having to manually write context files every single time. CLAUDE.md, .cu...

Reddit - Artificial Intelligence · 1 min ·

Find out what’s new in the Gemini app in March's Gemini Drop.

Gemini Drops is our regular monthly update on how to get the most out of the Gemini app.

AI Tools & Products · 1 min ·