[2602.19914] Watson & Holmes: A Naturalistic Benchmark for Comparing Human and LLM Reasoning
Summary
The paper presents the Watson & Holmes benchmark, which adapts a detective tabletop game to evaluate LLM reasoning against human reasoning in naturalistic contexts, and documents rapid improvement in model performance over nine months of 2025.
Why It Matters
This research addresses the gap in understanding how closely AI reasoning resembles human reasoning in real-world scenarios. By adapting a detective game as a benchmark, with incrementally revealed evidence and open-ended questions, it offers a more naturalistic and directly human-comparable evaluation of AI models than conventional reasoning benchmarks.
Key Takeaways
- The Watson & Holmes benchmark evaluates AI reasoning in naturalistic contexts.
- AI models improved markedly over nine months of 2025, rising from the lower quartile of the human comparison group to approximately the top 5%.
- Puzzle-dependent differences between AI and human performance were mostly absent, with limited exceptions.
- Longer case lengths negatively impacted AI model performance.
- Inductive reasoning advantages were observed in early stages of case solving.
Computer Science > Artificial Intelligence
arXiv:2602.19914 (cs)
[Submitted on 23 Feb 2026]
Title: Watson & Holmes: A Naturalistic Benchmark for Comparing Human and LLM Reasoning
Authors: Thatchawin Leelawat, Lewis D Griffin
Abstract
Existing benchmarks for AI reasoning provide limited insight into how closely these capabilities resemble human reasoning in naturalistic contexts. We present an adaptation of the Watson & Holmes detective tabletop game as a new benchmark designed to evaluate reasoning performance using incrementally presented narrative evidence, open-ended questions and unconstrained language responses. An automated grading system was developed and validated against human assessors to enable scalable and replicable performance evaluation. Results show a clear improvement in AI model performance over time. Over nine months of 2025, model performance rose from the lower quartile of the human comparison group to approximately the top 5%. Around half of this improvement reflects steady advancement across successive model releases, while the remainder corresponds to a marked step change associated with reasoning-oriented model architectures. Systematic differences in the performance of AI models compared to humans, dependent on features of the specific detection puzzle, were mostly absent with the excepti...
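The abstract outlines an evaluation protocol: narrative evidence is revealed stage by stage, the model answers open-ended questions in unconstrained free text, and an automated grader (validated against human assessors) scores the responses. The paper does not publish its implementation, so the following is only a minimal sketch of what such a loop could look like; `CaseStage`, `query_model`, `grade_answer`, and the rubric format are hypothetical stand-ins, not the authors' code.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CaseStage:
    evidence: str          # narrative evidence revealed at this stage
    questions: list[str]   # open-ended questions posed after the reveal
    rubrics: list[str]     # one grading rubric per question, for the auto-grader

def evaluate_case(
    stages: list[CaseStage],
    query_model: Callable[[str, str], str],          # (evidence so far, question) -> free-text answer
    grade_answer: Callable[[str, str, str], float],  # (question, rubric, answer) -> score in [0, 1]
) -> list[float]:
    """Run one detective case: reveal evidence incrementally, ask
    open-ended questions, and score unconstrained answers with an
    automated grader. Returns the mean score at each stage."""
    dossier = ""                       # evidence accumulates across stages
    stage_scores: list[float] = []
    for stage in stages:
        dossier += "\n" + stage.evidence
        scores = [
            grade_answer(q, rubric, query_model(dossier, q))
            for q, rubric in zip(stage.questions, stage.rubrics)
        ]
        stage_scores.append(sum(scores) / len(scores))  # assumes each stage has questions
    return stage_scores

# Toy run with stub callables; a real setup would call an LLM for the
# answer and an auto-grader validated against human assessors:
stages = [CaseStage("A witness saw a tall figure near the greenhouse.",
                    ["Who is the prime suspect?"],
                    ["Full credit if the answer names the gardener."])]
print(evaluate_case(stages, lambda ev, q: "The gardener.", lambda q, r, a: 1.0))
```

Under this framing, validating the automated grader, as the abstract describes, would amount to having humans score a shared sample of answers and checking that `grade_answer` agrees with them closely enough before relying on it at scale.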