[2510.18871] How Do LLMs Use Their Depth?
Computer Science > Computation and Language
arXiv:2510.18871 (cs)
[Submitted on 21 Oct 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: How Do LLMs Use Their Depth?
Authors: Akshat Gupta, Jay Yeung, Gopala Anumanchipalli, Anna Ivanova

Abstract: Growing evidence suggests that large language models do not use their depth uniformly, yet we still lack a fine-grained understanding of their layer-wise prediction dynamics. In this paper, we trace the intermediate representations of several open-weight models during inference and reveal a structured and nuanced use of depth. Specifically, we propose a "Guess-then-Refine" framework that explains how LLMs internally structure their computations to make predictions. We first show that the top-ranked predictions in early LLM layers are composed primarily of high-frequency tokens, which act as statistical guesses proposed by the model due to the lack of contextual information. As contextual information develops deeper into the model, these initial guesses get refined into contextually appropriate tokens. We then examine the dynamic usage of layer depth through three case studies. (i) Multiple-choice task analysis shows that the model identifies appropriate options within the first half of the model and finalizes the response in the latter half. (ii) Fact recall task analysis shows that in a multi-token...
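The layer-wise tracing described in the abstract is commonly done with a "logit lens"-style analysis: each intermediate hidden state is projected through the model's unembedding matrix to see which tokens it would predict at that depth. The sketch below illustrates the mechanics with toy random weights (the model sizes, weights, and helper name `layer_topk` are illustrative assumptions, not the paper's actual setup):

```python
# Sketch of logit-lens-style layer-wise prediction tracing.
# All dimensions and weights here are toy values for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_layers, d_model, vocab = 4, 8, 10

# Toy residual-stream hidden states, one per layer, for a single token position.
hidden = rng.normal(size=(n_layers, d_model))
W_U = rng.normal(size=(d_model, vocab))  # shared unembedding matrix


def layer_topk(h, W_U, k=3):
    """Project an intermediate hidden state onto the vocabulary and
    return the ids of the k highest-scoring candidate tokens."""
    logits = h @ W_U
    return list(np.argsort(logits)[::-1][:k])


# Tracing how the top-ranked candidates change with depth:
for layer, h in enumerate(hidden):
    print(f"layer {layer}: top-3 token ids = {layer_topk(h, W_U)}")
```

In a real experiment, `hidden` would come from a forward pass of an open-weight model (e.g. via `output_hidden_states=True` in Hugging Face Transformers), and early-layer top candidates could then be compared against corpus token frequencies to test the "statistical guess" claim.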