[2507.09875] Function Induction and Task Generalization: An Interpretability Study with Off-by-One Addition
Computer Science > Computation and Language

arXiv:2507.09875 (cs)

[Submitted on 14 Jul 2025 (v1), last revised 4 Mar 2026 (this version, v3)]

Title: Function Induction and Task Generalization: An Interpretability Study with Off-by-One Addition

Authors: Qinyuan Ye, Robin Jia, Xiang Ren

Abstract: Large language models demonstrate the intriguing ability to perform unseen tasks via in-context learning. However, it remains unclear what mechanisms inside the model drive such task-level generalization. In this work, we approach this question through the lens of off-by-one addition (i.e., 1+1=3, 2+2=5, 3+3=?), a two-step, counterfactual task with an unexpected +1 function as a second step. Leveraging circuit-style interpretability techniques such as path patching, we analyze the models' internal computations behind their performance and present three key findings. First, we identify a mechanism that explains the model's generalization from standard addition to off-by-one addition. It resembles the induction head mechanism described in prior work, yet operates at a higher level of abstraction; we therefore term it "function induction" in this work. Second, we show that the induction of the +1 function is governed by multiple attention heads in parallel, each of which emits a distinct piece of the +...
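The off-by-one addition task from the abstract can be sketched in a few lines. The prompt format and function names below are illustrative assumptions, not the paper's exact experimental setup; they only reproduce the two-step structure (standard addition, then an unexpected +1) and the in-context examples given in the abstract (1+1=3, 2+2=5, 3+3=?).

```python
def off_by_one(a: int, b: int) -> int:
    """Two-step counterfactual task: standard addition followed by an unexpected +1."""
    return a + b + 1

# Few-shot prompt in the style of the abstract's examples.
# A model generalizing correctly should continue "3+3=" with 7.
examples = [(1, 1), (2, 2)]
prompt = "".join(f"{a}+{b}={off_by_one(a, b)}\n" for a, b in examples)
prompt += "3+3="

print(prompt)            # 1+1=3 / 2+2=5 / 3+3=
print(off_by_one(3, 3))  # 7
```

The point of the counterfactual +1 step is that the correct continuation cannot be retrieved from memorized standard addition; the model must infer the extra function from the in-context examples.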