[2503.13401] Levels of Analysis for Large Language Models
Computer Science > Computation and Language

arXiv:2503.13401 (cs)
[Submitted on 17 Mar 2025 (v1), last revised 21 Mar 2026 (this version, v3)]

Title: Levels of Analysis for Large Language Models

Authors: Alexander Y. Ku, Declan Campbell, Xuechunzi Bai, Jiayi Geng, Ryan Liu, Raja Marjieh, R. Thomas McCoy, Andrew Nam, Ilia Sucholutsky, Veniamin Veselovsky, Liyi Zhang, Jian-Qiao Zhu, Thomas L. Griffiths

Abstract: Modern artificial intelligence systems, such as large language models, are increasingly powerful but also increasingly hard to understand. Recognizing this problem as analogous to the historical difficulties in understanding the human mind, we argue that methods developed in cognitive science can be useful for understanding large language models. We propose a framework for applying these methods based on the levels of analysis that David Marr proposed for studying information processing systems. By revisiting established cognitive science techniques relevant to each level and illustrating their potential to yield insights into the behavior and internal organization of large language models, we aim to provide a toolkit for making sense of these new kinds of minds.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2503.13401 [cs.CL] (or arXiv:2503.13401v3 [cs.CL] for this version) htt...