[2602.16837] A Residual-Aware Theory of Position Bias in Transformers
Summary
This paper presents a residual-aware theory of position bias in Transformers, showing how residual connections prevent attention collapse and give rise to a U-shaped position bias at finite depth.
Why It Matters
Understanding position bias in Transformers is crucial for improving model performance on NLP tasks, especially long-context tasks where information in the middle of the input is often underused. This research identifies the architectural factors that shape attention distribution, which can inform future model designs.
Key Takeaways
- Transformers exhibit a systematic position bias favoring certain token positions.
- Residual connections play a critical role in preventing attention collapse in deep models.
- At finite depth, causal Transformers provably induce a U-shaped position bias, with attention concentrating on early and late tokens.
- This research addresses the 'Lost-in-the-Middle' phenomenon in Transformer models.
- The findings can guide future architectural improvements in NLP models.
Paper Details
Computer Science > Machine Learning (cs.LG)
arXiv:2602.16837 [cs.LG], submitted on 18 Feb 2026
Title: A Residual-Aware Theory of Position Bias in Transformers
Authors: Hanna Herasimchyk, Robin Labryga, Tomislav Prusina, Sören Laue
DOI: https://doi.org/10.48550/arXiv.2602.16837
Abstract
Transformer models systematically favor certain token positions, yet the architectural origins of this position bias remain poorly understood. Under causal masking at infinite depth, prior theoretical analyses of attention rollout predict an inevitable collapse of attention onto the first token. Such collapse, however, does not occur in practice. We resolve this discrepancy with a residual-aware theory of cumulative attention rollout. By incorporating residual connections, we show that this architectural component prevents collapse under realistic conditions. At finite depth, we prove that causal Transformers induce a U-shaped position bias, with attention concentrating on early and late tokens. This result provides a principled architectural explanation for the Lost-in-the-Middle phenomenon.
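The contrast between plain and residual-aware rollout can be illustrated numerically. The sketch below is an assumption about the setup, not the paper's exact formulation: it uses uniform causal attention (each token attends equally to all earlier tokens and itself) and the common residual-aware rollout rule 0.5 * (A + I) from Abnar & Zuidema's attention-rollout method. Under these assumptions, plain rollout drives nearly all of the last token's attention onto position 0 as layers compound, while the residual term markedly slows that collapse at finite depth.

```python
import numpy as np

def uniform_causal_attention(n):
    """Uniform causal attention: token i attends equally to tokens 0..i."""
    A = np.tril(np.ones((n, n)))
    return A / A.sum(axis=1, keepdims=True)

def rollout(A, depth, residual=False):
    """Cumulative attention rollout across `depth` identical layers.

    With residual=True, each layer's effective mixing matrix is
    0.5 * (A + I), a standard way for rollout to account for the
    residual (skip) connection.
    """
    M = 0.5 * (A + np.eye(len(A))) if residual else A
    return np.linalg.matrix_power(M, depth)

n, depth = 8, 12
A = uniform_causal_attention(n)
plain = rollout(A, depth)                # collapses toward the first token
res = rollout(A, depth, residual=True)   # residual slows the collapse

# Cumulative attention the last token places on position 0:
print("plain rollout:", plain[-1, 0])
print("residual-aware:", res[-1, 0])
```

With these toy numbers, `plain[-1, 0]` exceeds 0.99 while the residual-aware value stays visibly lower, matching the paper's qualitative claim that residual connections prevent the predicted collapse at realistic depths.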