[2602.16740] Quantifying LLM Attention-Head Stability: Implications for Circuit Universality
Summary
This article examines how stable attention heads are across independently trained instances of the same transformer architecture, with implications for circuit universality, representational robustness, and AI safety.
Why It Matters
Understanding the stability of attention heads in large language models (LLMs) is crucial for ensuring reliable AI systems. This research highlights the need for robust mechanisms in AI oversight, particularly in safety-critical applications, where consistent behavior across different model instances is essential.
Key Takeaways
- Middle-layer attention heads are the least stable but most distinct in representation.
- Deeper transformer models show greater mid-depth divergence.
- Unstable heads in deeper layers gain functional importance.
- Weight decay optimization enhances attention-head stability.
- The residual stream exhibits comparatively high stability.
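The paper quantifies, layer by layer, how similarly attention heads represent inputs across independently initialized training runs. The summary does not state which similarity metric the authors use, so the sketch below uses linear Centered Kernel Alignment (CKA), a standard choice for comparing representations across runs; the shapes and variable names are hypothetical.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation
    matrices of shape (n_samples, n_features). CKA is invariant to
    orthogonal transforms and isotropic scaling, which makes it a
    common metric for comparing representations across training runs."""
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Ratio of cross-covariance to self-covariance Frobenius norms
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# Hypothetical usage: compare one head's outputs, collected on the same
# evaluation prompts, across two independently initialized runs.
rng = np.random.default_rng(0)
acts_run_a = rng.standard_normal((512, 64))                     # (tokens, head_dim)
acts_run_b = acts_run_a @ rng.standard_normal((64, 64)) * 0.5   # a related representation

print(round(linear_cka(acts_run_a, acts_run_a), 3))  # identical representations -> 1.0
```

A per-layer stability profile like the paper's would then be the distribution of such scores over all heads in a layer, computed for each pair of refits.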
Computer Science > Machine Learning
arXiv:2602.16740 (cs) [Submitted on 17 Feb 2026]
Authors: Karan Bali, Jack Stanley, Praneet Suresh, Danilo Bzdok
Abstract: In mechanistic interpretability, recent work scrutinizes transformer "circuits": sparse, mono- or multi-layer sub-computations that may reflect human-understandable functions. Yet these circuits are rarely acid-tested for stability across different instances of the same deep learning architecture. Without this, it remains unclear whether reported circuits emerge universally across labs or turn out to be idiosyncratic to a particular estimation instance, potentially limiting confidence in safety-critical settings. Here, we systematically study stability across refits of increasingly complex transformer language models of various sizes. We quantify, layer by layer, how similarly attention heads learn representations across independently initialized training runs. Our experiments show that (1) middle-layer heads are the least stable yet the most representationally distinct; (2) deeper models exhibit stronger mid-depth divergence; (3) unstable heads in deeper layers become more functionally important than their peers from the same layer; (4) ...
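Finding (3) rests on measuring a head's functional importance. A common proxy, not confirmed as the paper's exact method, is zero-ablation: silence one head's output and measure how much the layer's output changes. Below is a minimal sketch on a toy multi-head attention layer with random (untrained) weights; all dimensions and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tokens, n_heads, head_dim = 16, 4, 8
d_model = n_heads * head_dim

# Toy multi-head self-attention weights (random, not a trained LLM)
x = rng.standard_normal((n_tokens, d_model))
Wq = rng.standard_normal((n_heads, d_model, head_dim)) / np.sqrt(d_model)
Wk = rng.standard_normal((n_heads, d_model, head_dim)) / np.sqrt(d_model)
Wv = rng.standard_normal((n_heads, d_model, head_dim)) / np.sqrt(d_model)
Wo = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(x, ablate=None):
    """Standard multi-head attention; zeroing one head's output
    before concatenation implements zero-ablation of that head."""
    heads = []
    for h in range(n_heads):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
        a = softmax(q @ k.T / np.sqrt(head_dim))
        out = a @ v
        if h == ablate:
            out = np.zeros_like(out)
        heads.append(out)
    return np.concatenate(heads, axis=-1) @ Wo

baseline = attention(x)
# Importance proxy: how much the layer output moves when each head is silenced
importance = [np.linalg.norm(attention(x, ablate=h) - baseline)
              for h in range(n_heads)]
```

In a real model one would measure the change in task loss rather than raw output norm, and compare ablation effects between stable and unstable heads within the same layer.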