[2603.26246] Distilling Conversations: Abstract Compression of Conversational Audio Context for LLM-based ASR
Computer Science > Computation and Language
arXiv:2603.26246 (cs)
[Submitted on 27 Mar 2026]

Title: Distilling Conversations: Abstract Compression of Conversational Audio Context for LLM-based ASR
Authors: Shashi Kumar, Esaú Villatoro-Tello, Sergio Burdisso, Kadri Hacioglu, Thibault Bañeras-Roux, Hasindri Watawana, Dairazalia Sanchez-Cortes, Srikanth Madikeri, Petr Motlicek, Andreas Stolcke

Abstract: Standard LLM-based speech recognition systems typically process utterances in isolation, limiting their ability to leverage conversational context. In this work, we study whether multimodal context from prior turns improves LLM-based ASR and how to represent that context efficiently. We find that, after supervised multi-turn training, conversational context mainly helps with the recognition of contextual entities. However, conditioning on raw context is expensive because the prior-turn audio token sequence grows rapidly with conversation length. To address this, we propose Abstract Compression, which replaces the audio portion of prior turns with a fixed number of learned latent tokens while retaining the corresponding transcripts explicitly. On both in-domain and out-of-domain test sets, the compressed model recovers part of the gains of raw-context conditioning with a smaller prior-turn audio [...]
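The core idea of Abstract Compression, as the abstract describes it, is that variable-length prior-turn audio is replaced by a fixed number of learned latent tokens, so the context cost stops growing with conversation length. The paper does not specify the compressor's architecture here; the sketch below is a hypothetical illustration using cross-attention from learned latent queries to the audio embeddings (all names, dimensions, and the module design are assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn


class AbstractCompressor(nn.Module):
    """Hypothetical sketch: map a variable-length sequence of prior-turn
    audio embeddings to a fixed number K of latent summary tokens."""

    def __init__(self, d_model: int = 256, num_latents: int = 8, num_heads: int = 4):
        super().__init__()
        # K learned latent query vectors, shared across conversations.
        self.latents = nn.Parameter(torch.randn(num_latents, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, audio_emb: torch.Tensor) -> torch.Tensor:
        # audio_emb: (batch, T, d_model); T grows with conversation length.
        batch = audio_emb.size(0)
        queries = self.latents.unsqueeze(0).expand(batch, -1, -1)
        # Latents attend over the audio frames and absorb their content.
        summary, _ = self.attn(queries, audio_emb, audio_emb)
        return summary  # (batch, K, d_model): fixed size regardless of T


# Prior-turn audio of any length compresses to the same K latent tokens;
# per the abstract, the prior transcripts would still be kept as text.
comp = AbstractCompressor()
short_ctx = comp(torch.randn(1, 50, 256))
long_ctx = comp(torch.randn(1, 500, 256))
assert short_ctx.shape == long_ctx.shape == (1, 8, 256)
```

Under this reading, the LLM's prompt for the current turn would concatenate the K latent tokens (in place of raw audio tokens) with the explicit prior transcripts, bounding the context length per prior turn.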