[2404.08634] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models
Summary
This article summarizes a paper on the phenomenon of 'attention collapse' in large language models (LLMs) and the paper's proposed method, Inheritune, for building smaller, more efficient models that maintain high performance.
Why It Matters
Understanding the inefficiencies in LLMs and how to address them is crucial for advancing AI technology. The proposed Inheritune method could lead to more accessible and efficient models, reducing computational costs and environmental impact.
Key Takeaways
- Attention collapse in LLMs leads to structural inefficiencies.
- Inheritune enables smaller models that can match or outperform their larger counterparts.
- The method leverages early layers from pre-trained models for enhanced performance.
- This research paves the way for more efficient model compression techniques.
- The findings could significantly impact the development of future AI models.
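The "attention collapse" the paper describes refers to attention matrices in deeper layers degenerating toward near rank-one structure, i.e., every query attending to roughly the same distribution over keys. As an illustrative sketch (not code from the paper), one hedged way to quantify this is an effective-rank count over the singular values of an attention matrix; the `tol` threshold below is an assumption chosen for demonstration:

```python
import numpy as np

def effective_rank(attn: np.ndarray, tol: float = 1e-3) -> int:
    """Count singular values above tol * largest; a collapsed
    ('lazy') attention matrix yields an effective rank near 1."""
    s = np.linalg.svd(attn, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# Near-rank-one attention: every query row is (almost) the same
# distribution over keys, so the matrix has a single dominant direction.
p = np.array([0.7, 0.2, 0.05, 0.05])
collapsed = np.tile(p, (4, 1))            # 4 identical rows -> rank 1
healthy = np.eye(4) * 0.9 + 0.025         # mostly-diagonal attention, rows sum to 1

print(effective_rank(collapsed))  # → 1
print(effective_rank(healthy))    # → 4
```

A layer whose heads consistently produce effective rank near 1 would be a candidate "lazy layer" in the paper's terminology.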
Computer Science > Computation and Language
arXiv:2404.08634 (cs)
[Submitted on 12 Apr 2024 (v1), last revised 16 Feb 2026 (this version, v4)]
Title: When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models
Authors: Sunny Sanyal, Ravid Shwartz-Ziv, Alexandros G. Dimakis, Sujay Sanghavi
Abstract: Large Language Models (LLMs) are known for their performance, but we uncover a significant structural inefficiency: a phenomenon we term attention collapse. In many pre-trained decoder-style LLMs, the attention matrices in deeper layers degenerate, collapsing to near rank-one structures. These underutilized layers, which we call lazy layers, are redundant and impair model efficiency. To address this, we introduce Inheritune, a simple yet powerful training recipe designed to build smaller, stronger language models. Inheritune initializes a compact model by inheriting the potent early layers from a larger pre-trained model and then progressively trains and expands it. Our experiments on various models, including the GPT-2 family, demonstrate that models trained with Inheritune can match or even surpass the performance of their larger counterparts, despite having significantly fewer layers. This work presents a novel path toward model compression by design, enabling the creat...
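The abstract's recipe has two phases: initialize a compact model from the early layers of the larger pre-trained model, then progressively expand and train it. A minimal sketch of that control flow is below, assuming layers are represented as plain parameter dicts; the helpers `inheritune_init` and `grow`, the layer count, and the tensor shapes are all hypothetical stand-ins, not the paper's implementation:

```python
import copy
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained 12-layer model: a list of per-layer
# weight dicts. (Hypothetical structure; real LLM layers hold many tensors.)
teacher = [{"W": rng.standard_normal((8, 8))} for _ in range(12)]

def inheritune_init(teacher_layers, n_inherit):
    """Build a compact model by inheriting the first n_inherit
    (potent, non-collapsed) layers of the larger pre-trained model."""
    return copy.deepcopy(teacher_layers[:n_inherit])

def grow(student_layers, n_new, width=8):
    """Progressively expand: append freshly initialized layers, which
    would then be trained together with the inherited ones."""
    return student_layers + [
        {"W": rng.standard_normal((width, width)) * 0.02}
        for _ in range(n_new)
    ]

student = inheritune_init(teacher, n_inherit=4)  # keep the early layers
student = grow(student, n_new=2)                 # expand, then train further
print(len(student))  # → 6
```

The actual training steps between expansion rounds are omitted here; the point of the sketch is the initialize-from-early-layers-then-grow structure the abstract describes.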