[2603.02540] A Neuropsychologically Grounded Evaluation of LLM Cognitive Abilities
Computer Science > Artificial Intelligence
arXiv:2603.02540 (cs)
[Submitted on 3 Mar 2026]

Title: A Neuropsychologically Grounded Evaluation of LLM Cognitive Abilities
Authors: Faiz Ghifari Haznitrama, Faeyza Rishad Ardi, Alice Oh

Abstract: Large language models (LLMs) exhibit a unified "general factor" of capability across 10 benchmarks, a finding confirmed by our factor analysis of 156 models, yet they still struggle with tasks that are trivial for humans. This is because current benchmarks focus on task completion and fail to probe the foundational cognitive abilities that underlie these behaviors. We address this gap by introducing the NeuroCognition benchmark, grounded in three adapted neuropsychological tests: Raven's Progressive Matrices (abstract relational reasoning), Spatial Working Memory (maintenance and systematic search), and the Wisconsin Card Sorting Test (cognitive flexibility). Our evaluation reveals that while models perform strongly on text, their performance degrades on images and with increased complexity. Furthermore, we observe that complex reasoning is not universally beneficial, whereas simple, human-like strategies yield partial gains. We also find that NeuroCognition correlates positively with standard general-capability benchmarks while still measuring distinct cognitive abilities.
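The "general factor" claim rests on a standard one-factor analysis of a models-by-benchmarks score matrix. The sketch below shows the general shape of such an analysis, not the paper's actual pipeline: the 156 x 10 score matrix here is synthetic stand-in data, and the variance-explained heuristic (sum of squared standardized loadings over the number of benchmarks) is one common convention.

```python
# A minimal sketch of a one-factor analysis over a (156 models x 10 benchmarks)
# score matrix. The data below is synthetic; the paper's real scores are assumed.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
g = rng.normal(size=(156, 1))                         # hypothetical latent "general factor"
true_loadings = rng.uniform(0.6, 0.9, size=(1, 10))   # how strongly each benchmark tracks g
scores = g @ true_loadings + 0.4 * rng.normal(size=(156, 10))

z = StandardScaler().fit_transform(scores)            # standardize each benchmark column
fa = FactorAnalysis(n_components=1).fit(z)
loadings = fa.components_.ravel()                     # loading of each benchmark on the factor
print("loadings:", np.round(loadings, 2))

# Common heuristic: share of total (standardized) variance captured by the factor.
explained = (loadings ** 2).sum() / z.shape[1]
print(f"variance explained by one factor: {explained:.0%}")
```

High, roughly uniform loadings and a large explained-variance share are what would support a single dominant factor across benchmarks.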
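The abstract does not describe how the Wisconsin Card Sorting Test was adapted, so the following is purely an illustrative sketch of the classic WCST setup it references: a hidden sorting rule switches covertly after a streak of correct responses, and flexibility means re-inferring the rule from feedback alone. The win-stay/lose-shift agent is a hypothetical example of the "simple, human-like strategies" the abstract mentions, not the paper's method.

```python
# Hypothetical WCST-style flexibility probe (an assumption, not the paper's protocol).
import random

RULES = ["color", "shape", "number"]

class WinStayLoseShiftAgent:
    """Simple human-like strategy: keep the current rule while it works, switch on error."""
    def __init__(self):
        self.guess = "color"
    def choose(self):
        return self.guess
    def update(self, correct):
        if not correct:
            self.guess = random.choice([r for r in RULES if r != self.guess])

def run_wcst(agent, n_trials=64, switch_after=6, seed=0):
    rng = random.Random(seed)
    rule, streak, n_correct = rng.choice(RULES), 0, 0
    for _ in range(n_trials):
        ok = agent.choose() == rule
        agent.update(ok)            # feedback is the only signal about the hidden rule
        n_correct += ok
        streak = streak + 1 if ok else 0
        if streak == switch_after:  # covert rule change, as in the classic test
            rule = rng.choice([r for r in RULES if r != rule])
            streak = 0
    return n_correct / n_trials

print(f"accuracy: {run_wcst(WinStayLoseShiftAgent()):.0%}")
```

An inflexible agent (one that never updates its guess after the first rule switch) scores near chance on later trials, which is how perseveration shows up in this kind of probe.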