[2603.05121] Measuring the Redundancy of Decoder Layers in SpeechLLMs
Computer Science > Computation and Language
arXiv:2603.05121 (cs) [Submitted on 5 Mar 2026]

Title: Measuring the Redundancy of Decoder Layers in SpeechLLMs
Authors: Adel Moumen, Guangzhi Sun, Philip C Woodland

Abstract: Speech Large Language Models (SpeechLLMs) route speech encoder representations into an LLM decoder that typically accounts for over 90% of total parameters. We study how much of this decoder capacity is actually needed for speech tasks. Across two LLM families and three scales (1-8B), we show that decoder redundancy is largely inherited from the pretrained LLM: text and speech inputs yield similar redundant blocks. We then measure excess capacity by pruning decoder layers and analysing post-pruning healing, which increases robustness. Our findings show that 7-8B models retain good ASR performance with only 60% of their decoder layers, and the same trend extends to smaller scales, albeit with reduced pruning tolerance. Generalising to speech translation, we show that the same blocks of layers are redundant across speech encoders, tasks, and languages, indicating a more global redundancy structure that allows a single pruned, multi-task SpeechLLM backbone to be deployed.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.05121 [cs.CL] (or arXiv:2603.05121v1 [cs.CL] for this version)
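To illustrate the pruning operation the abstract describes, the following is a minimal sketch (not the authors' code): a decoder is modelled as an ordered list of layer callables, and pruning removes a contiguous block of layers while keeping the rest in order. The `prune_layers` and `forward` helpers, and the toy additive "layers", are illustrative assumptions; a real SpeechLLM decoder would hold transformer blocks, but the block-removal logic is the same.

```python
# Toy sketch of contiguous-block layer pruning (assumed helpers, not the paper's code).

def prune_layers(layers, start, end):
    """Return a new layer list with the contiguous block layers[start:end] removed."""
    return layers[:start] + layers[end:]

def forward(layers, x):
    """Apply the remaining layers in order."""
    for layer in layers:
        x = layer(x)
    return x

# Toy "decoder": 10 layers, each adding its own index to the activation.
layers = [lambda x, i=i: x + i for i in range(10)]

full = forward(layers, 0)            # sum of 0..9 = 45
pruned = prune_layers(layers, 4, 8)  # drop layers 4-7, i.e. 40% of the stack
kept = forward(pruned, 0)            # 0+1+2+3+8+9 = 23
print(len(pruned), kept)
```

After pruning, the paper additionally "heals" the model (a short fine-tuning pass) so the remaining layers compensate for the removed block; the sketch above only shows the structural removal step.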