[2604.03480] Large Language Models Align with the Human Brain during Creative Thinking
Quantitative Biology > Neurons and Cognition

arXiv:2604.03480 (q-bio) [Submitted on 3 Apr 2026]

Title: Large Language Models Align with the Human Brain during Creative Thinking

Authors: Mete Ismayilzada, Simone A. Luchini, Abdulkadir Gokce, Badr AlKhamissi, Antoine Bosselut, Antonio Laverghetta Jr., Lonneke van der Plas, Roger E. Beaty

Abstract: Creative thinking is a fundamental aspect of human cognition, and divergent thinking, the capacity to generate novel and varied ideas, is widely regarded as its core generative engine. Large language models (LLMs) have recently demonstrated impressive performance on divergent thinking tests, and prior work has shown that models with higher task performance tend to be more closely aligned with human brain activity. However, existing brain-LLM alignment studies have focused on passive, non-creative tasks. Here, we explore brain alignment during creative thinking using fMRI data from 170 participants performing the Alternate Uses Task (AUT). We extract representations from LLMs varying in size (270M-72B parameters) and measure alignment to brain responses via Representational Similarity Analysis (RSA), targeting the creativity-related default mode and frontoparietal networks. We find that brain-LLM alignment scales with model size (default mode network only) and idea originality (both netw...