[2510.01685] How Do Language Models Compose Functions?
Computer Science > Computation and Language

arXiv:2510.01685 (cs)

[Submitted on 2 Oct 2025 (v1), last revised 8 May 2026 (this version, v2)]

Title: How Do Language Models Compose Functions?

Authors: Apoorv Khandelwal, Ellie Pavlick

Abstract: While large language models (LLMs) appear to be increasingly capable of solving compositional tasks, it is an open question whether they do so using compositional mechanisms. In this work, we investigate how feedforward LLMs solve two-hop factual recall tasks, which can be expressed compositionally as $g(f(x))$. We first confirm that modern LLMs continue to suffer from the "compositionality gap", i.e., their ability to compute both $z = f(x)$ and $y = g(z)$ does not entail their ability to compute the composition $y = g(f(x))$. We then decode residual stream representations and identify two processing mechanisms: one which solves tasks $\textit{compositionally}$, computing $f(x)$ along the way to $g(f(x))$, and one which solves them $\textit{idiomatically}$ (i.e., directly), without any detectable signature of the intermediate variable $f(x)$. Finally, we find that embedding space geometry is strongly related to which mechanism is employed: the idiomatic mechanism is dominant when tasks are represented by translations from $x$ to $g(f(x))$ in the embedding space. We fully release our data and code at: this https URL
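The compositionality gap described in the abstract can be made concrete with a small evaluation sketch. The snippet below is illustrative only and is not the paper's released code or data: the `answer` stub, the example facts, and the prompt templates are hypothetical placeholders. It counts how often a model answers both single hops $f(x)$ and $g(z)$ correctly while still failing the composed query $g(f(x))$.

```python
# Minimal sketch of measuring the "compositionality gap" on two-hop factual
# recall tasks of the form g(f(x)). All names below (answer, EXAMPLES, the
# prompt strings) are hypothetical placeholders, not the paper's artifacts.

def answer(prompt: str) -> str:
    # Stand-in for querying an LLM (e.g., greedy decoding of a short answer).
    # Replace this stub with a real model call before using the sketch.
    return ""

# Each example fixes x, the intermediate z = f(x), and the target y = g(z).
EXAMPLES = [
    {
        "hop1": "In which city is the Eiffel Tower located?",        # f(x)
        "z": "Paris",
        "hop2": "In which country is Paris located?",                # g(z)
        "composed": "In which country is the Eiffel Tower located?",  # g(f(x))
        "y": "France",
    },
]

both_hops_correct = 0
composed_correct_given_hops = 0

for ex in EXAMPLES:
    hop1_ok = ex["z"].lower() in answer(ex["hop1"]).lower()
    hop2_ok = ex["y"].lower() in answer(ex["hop2"]).lower()
    if hop1_ok and hop2_ok:
        both_hops_correct += 1
        if ex["y"].lower() in answer(ex["composed"]).lower():
            composed_correct_given_hops += 1

# Compositionality gap: among examples where both single hops succeed,
# the fraction where the composed two-hop query still fails.
if both_hops_correct:
    gap = 1 - composed_correct_given_hops / both_hops_correct
    print(f"compositionality gap: {gap:.2%}")
else:
    print("no examples where both single hops succeeded")
```

Under this framing, a model with a large gap answers each hop in isolation but cannot chain them, which is the behavior the paper's mechanistic analysis then tries to explain.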