[2504.03889] Identifying and Evaluating Inactive Heads in Pretrained LLMs
Computer Science > Machine Learning
arXiv:2504.03889 (cs)
[Submitted on 4 Apr 2025 (v1), last revised 1 Mar 2026 (this version, v4)]

Title: Identifying and Evaluating Inactive Heads in Pretrained LLMs
Authors: Pedro Sandoval-Segura, Xijun Wang, Ashwinee Panda, Micah Goldblum, Ronen Basri, Tom Goldstein, David Jacobs

Abstract: Attention is foundational to large language models (LLMs), enabling different heads to focus on diverse, relevant input tokens. However, learned behaviors like attention sinks, where the first token receives the most attention despite limited semantic importance, suggest that some heads may be inactive and point to a significant source of computational redundancy. To analyze this phenomenon, we evaluate 12 score functions that measure different ways a head can be inactive. Thresholding these scores allows us to analyze different sets of potentially inactive attention heads. We evaluate whether identified heads are inactive through model interventions, finding that more than 12% of attention heads are inactive on average and can be ablated in specific contexts while keeping MMLU accuracy within 1% of the pretrained LLM. Across 3 model families, our score functions that measure the average norm of a head's output consistently identify inactive heads that would not have been found by s...
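The norm-based scoring the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`head_output_norm_scores`, `flag_inactive_heads`), the synthetic data, and the threshold value are all assumptions chosen for the example; the idea is simply that a head whose output vectors have small average norm contributes little to the residual stream and is a candidate for ablation.

```python
import numpy as np

def head_output_norm_scores(head_outputs):
    """Average L2 norm of each head's output vectors over a batch of tokens.

    head_outputs: array of shape (num_heads, num_tokens, head_dim).
    Returns one score per head; a small score suggests the head
    contributes little to the residual stream.
    """
    return np.linalg.norm(head_outputs, axis=-1).mean(axis=-1)

def flag_inactive_heads(scores, threshold):
    """Indices of heads whose average output norm falls below the threshold."""
    return np.flatnonzero(scores < threshold)

# Synthetic example (hypothetical sizes): 4 heads, 8 tokens, head_dim 16;
# head 2 is scaled to produce near-zero outputs, simulating an inactive head.
rng = np.random.default_rng(0)
outputs = rng.normal(size=(4, 8, 16))
outputs[2] *= 1e-3

scores = head_output_norm_scores(outputs)
inactive = flag_inactive_heads(scores, threshold=0.5)
print(inactive)  # only the near-zero head is flagged
```

In practice the threshold would be swept per model, as the paper does by evaluating several score functions and thresholds against MMLU accuracy after ablation.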