[2602.16837] A Residual-Aware Theory of Position Bias in Transformers


Summary

This paper presents a residual-aware theory of position bias in Transformers, showing how residual connections prevent attention collapse and why causal Transformers exhibit a U-shaped position bias at finite depth.

Why It Matters

Understanding position bias in Transformers is crucial for improving model performance on long-context NLP tasks. This research identifies the architectural factors that shape how attention mass is distributed over positions, which can inform future architecture design.

Key Takeaways

  • Transformers exhibit a systematic position bias favoring certain token positions.
  • Residual connections play a critical role in preventing attention collapse in deep models.
  • The study identifies a U-shaped position bias, concentrating attention on early and late tokens.
  • This research addresses the 'Lost-in-the-Middle' phenomenon in Transformer models.
  • The findings can guide future architectural improvements in NLP models.

Computer Science > Machine Learning
arXiv:2602.16837 (cs) · Submitted on 18 Feb 2026

Title: A Residual-Aware Theory of Position Bias in Transformers
Authors: Hanna Herasimchyk, Robin Labryga, Tomislav Prusina, Sören Laue

Abstract: Transformer models systematically favor certain token positions, yet the architectural origins of this position bias remain poorly understood. Under causal masking at infinite depth, prior theoretical analyses of attention rollout predict an inevitable collapse of attention onto the first token. Such collapse, however, does not occur in practice. We resolve this discrepancy with a residual-aware theory of cumulative attention rollout. By incorporating residual connections, we show that this architectural component prevents collapse under realistic conditions. At finite depth, we prove that causal Transformers induce a U-shaped position bias, with attention concentrating on early and late tokens. This result provides a principled architectural explanation for the Lost-in-the-Middle phenomenon.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2602.16837 [cs.LG] (or arXiv:2602.16837v1 [cs.LG] for this version), https://doi.org/10.48550/arXiv.2602.16837 (arXiv-issued DOI via DataCite, pending registration)
Submission history: From Sören Laue, [v1] Wed, 18 Feb 2026
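The collapse-versus-residual contrast in the abstract can be illustrated numerically. The sketch below is not the paper's construction: it uses random causal attention matrices, and the residual mixing weight `lam` is an illustrative assumption. It compares plain cumulative attention rollout, whose mass drifts onto the first token as depth grows, with a residual-aware rollout that mixes the identity into each layer's step.

```python
import numpy as np

# Illustrative sketch (not the paper's exact construction): compare plain
# attention rollout with a residual-aware variant on random causal attention.
rng = np.random.default_rng(0)
n, depth, lam = 16, 12, 0.2  # tokens, layers, residual mixing weight (assumed)

def causal_attention():
    """Random row-stochastic causal attention matrix (softmax over the prefix)."""
    logits = rng.normal(size=(n, n))
    logits[~np.tril(np.ones((n, n), dtype=bool))] = -np.inf
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def rollout(residual):
    """Cumulative attention rollout across layers, optionally mixing in identity."""
    R = np.eye(n)
    for _ in range(depth):
        A = causal_attention()
        if residual:
            A = (1 - lam) * np.eye(n) + lam * A  # residual-aware step
        R = A @ R
    return R

plain = rollout(residual=False)[-1]  # last token's cumulative attention over positions
resid = rollout(residual=True)[-1]

print("plain rollout, mass on first token:", round(float(plain[0]), 3))
print("residual-aware, mass on first/last token:",
      round(float(resid[0]), 3), round(float(resid[-1]), 3))
```

Without the residual term, the last row's mass concentrates on token 0; with it, visible mass remains on both the first and the last position, qualitatively matching the finite-depth U-shaped bias the paper proves (the identity mixing guarantees the diagonal entry stays at least (1 - lam)^depth).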
