[2602.07077] CALM: Class-Conditional Sparse Attention Vectors for Large Audio-Language Models
Computer Science > Sound
arXiv:2602.07077 (cs)
[Submitted on 6 Feb 2026 (v1), last revised 22 Mar 2026 (this version, v2)]

Title: CALM: Class-Conditional Sparse Attention Vectors for Large Audio-Language Models
Authors: Videet Mehta, Liming Wang, Hilde Kuehne, Rogerio Feris, James R. Glass, M. Jehanzeb Mirza

Abstract: Large audio-language models (LALMs) exhibit strong zero-shot capabilities on many downstream tasks, such as audio question answering (AQA) and abstract reasoning; however, these models still lag behind specialized models on certain discriminative tasks (e.g., audio classification). Recent studies show that sparse subsets of attention heads within an LALM can serve as strong discriminative feature extractors for downstream tasks such as classification via simple voting schemes. However, these methods assign uniform weights to all selected heads, implicitly assuming that each head contributes equally across all semantic categories. In this work, we propose Class-Conditional Sparse Attention Vectors for Large Audio-Language Models (CALM), a few-shot classification method that learns class-dependent importance weights over attention heads. This formulation allows individual heads to specialize in distinct semantic categories and to contribute to ensemble predictions proportionally to their est...