[2502.05795] The Curse of Depth in Large Language Models

arXiv - AI · 4 min read · Article

Summary

This paper introduces the 'Curse of Depth' in Large Language Models (LLMs), showing that nearly half of the layers in modern LLMs contribute far less than expected and tracing the cause to Pre-Layer Normalization (Pre-LN). It proposes LayerNorm Scaling (LNS), a simple modification that improves training performance across model sizes from 130M to 7B parameters.

Why It Matters

Understanding why depth is underused in LLMs is crucial for improving training efficiency and effectiveness: if a large fraction of a model's layers barely contribute, much of its compute and capacity is wasted. The proposed LayerNorm Scaling targets the root cause with a simple architectural change, which could lead to better-performing models across many AI and machine learning applications.

Key Takeaways

  • The 'Curse of Depth' names the observation that nearly half of the layers in modern LLMs are less effective than expected.
  • Pre-Layer Normalization (Pre-LN) is identified as the cause: its output variance grows with depth, so deep blocks behave almost like identity mappings.
  • LayerNorm Scaling (LNS) scales each layer's normalization output inversely by the square root of its depth, curbing the variance growth and significantly improving training.
  • The phenomenon and the fix are validated across model sizes from 130M to 7B parameters.
  • LNS enhances both pre-training and supervised fine-tuning outcomes.

Computer Science > Machine Learning
arXiv:2502.05795 (cs) [Submitted on 9 Feb 2025 (v1), last revised 22 Feb 2026 (this version, v5)]

Title: The Curse of Depth in Large Language Models
Authors: Wenfang Sun, Xinyuan Song, Pengxiang Li, Lu Yin, Yefeng Zheng, Shiwei Liu

Abstract: In this paper, we introduce the Curse of Depth, a concept that highlights, explains, and addresses the recent observation in modern Large Language Models (LLMs) where nearly half of the layers are less effective than expected. We first confirm the wide existence of this phenomenon across the most popular families of LLMs, such as Llama, Mistral, DeepSeek, and Qwen. Our analysis, both theoretical and empirical, identifies that the underlying reason for the ineffectiveness of deep layers in LLMs is the widespread usage of Pre-Layer Normalization (Pre-LN). While Pre-LN stabilizes the training of Transformer LLMs, its output variance grows exponentially with model depth, which undesirably causes the derivative of the deep Transformer blocks to approach an identity matrix, so these blocks barely contribute to training. To resolve this training pitfall, we propose LayerNorm Scaling (LNS), which scales the variance of the output of layer normalization inversely by the square root of its depth. This simple modification mitigates the output variance explosion of deeper Transformer...
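
To make the mechanism concrete, below is a minimal sketch (not the authors' released code) of a Pre-LN Transformer block with LayerNorm Scaling applied: each LayerNorm output is multiplied by 1/sqrt(l), where l is the block's 1-based depth, damping the variance growth the abstract describes. The module layout, hyperparameters, and the class name ScaledPreLNBlock are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a Pre-LN Transformer block with LayerNorm Scaling (LNS).
# Assumption: LNS is realized by multiplying each LayerNorm output by
# 1/sqrt(layer_index); module choices below are illustrative, not the paper's code.
import math
import torch
import torch.nn as nn


class ScaledPreLNBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, layer_index: int):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        # Depth-dependent scaling factor: inverse square root of the block's
        # depth (layer_index starts at 1 for the first Transformer block).
        self.scale = 1.0 / math.sqrt(layer_index)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pre-LN attention sub-layer, with the normalization output scaled down.
        h = self.ln1(x) * self.scale
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # Pre-LN feed-forward sub-layer with the same depth-dependent scaling.
        h = self.ln2(x) * self.scale
        x = x + self.mlp(h)
        return x
```

Stacking such blocks with layer_index = 1..L gives the Pre-LN + LNS setup in this sketch; dropping the self.scale factor recovers a standard Pre-LN block, where the residual stream's variance is free to grow with depth.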

