[2511.00206] Addressing Longstanding Challenges in Cognitive Science with Language Models
Computer Science > Artificial Intelligence
arXiv:2511.00206 (cs)
[Submitted on 31 Oct 2025 (v1), last revised 27 Feb 2026 (this version, v2)]

Title: Addressing Longstanding Challenges in Cognitive Science with Language Models
Authors: Dirk U. Wulff, Rui Mata

Abstract: Cognitive science faces ongoing challenges in research integration, formalization, conceptual clarity, and other areas, in part due to its multifaceted and interdisciplinary nature. Recent advances in artificial intelligence, particularly the development of language models, offer tools that may help to address these longstanding issues. We outline the current capabilities and limitations of language models in these domains, including potential pitfalls. Taken together, we conclude that language models could serve as tools for a more integrative and cumulative cognitive science when used judiciously to complement, rather than replace, human agency.

Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2511.00206 [cs.AI] (or arXiv:2511.00206v2 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2511.00206 (arXiv-issued DOI via DataCite)

Submission history
From: Dirk U. Wulff
[v1] Fri, 31 Oct 2025 19:08:48 UTC (2,912 KB)
[v2] Fri, 27 Feb 2026 23:16:14 UTC (989 KB)