[2507.14237] U-DREAM: Unsupervised Dereverberation guided by a Reverberation Model
Computer Science > Sound
arXiv:2507.14237 (cs)
[Submitted on 17 Jul 2025 (v1), last revised 26 Mar 2026 (this version, v2)]

Title: U-DREAM: Unsupervised Dereverberation guided by a Reverberation Model
Authors: Louis Bahrman (IDS, S2A), Marius Rodrigues (IDS, S2A), Mathieu Fontaine (IDS, S2A), Gaël Richard (IDS, S2A)

Abstract: This paper explores the outcome of training state-of-the-art dereverberation models with supervision settings ranging from weakly-supervised to virtually unsupervised, relying solely on reverberant signals and an acoustic model for training. Most existing deep learning approaches require paired dry and reverberant data, which are difficult to obtain in practice. We instead develop a sequential learning strategy motivated by a maximum-likelihood formulation of the dereverberation problem, wherein acoustic parameters and dry signals are estimated from reverberant inputs using deep neural networks, guided by a reverberation matching loss. Our most data-efficient variant requires only 100 reverberation-parameter-labeled samples to outperform an unsupervised baseline, demonstrating the effectiveness and practicality of the proposed method in low-resource scenarios.

Subjects: Sound (cs.SD); Artificial Intelligence (cs.AI); Audio and Speech Processing (eess.AS); Signal Processing (eess.SP)
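The abstract's "reverberation matching loss" suggests comparing the observed reverberant signal against the estimated dry signal convolved with an estimated room impulse response. A minimal sketch of that idea, assuming a simple time-domain convolution model and MSE criterion (function names and signal shapes are illustrative, not taken from the paper):

```python
import numpy as np

def reverberation_matching_loss(dry_est, rir_est, reverberant):
    """Hypothetical sketch: re-synthesize the reverberant signal by
    convolving the estimated dry signal with the estimated room impulse
    response (RIR), then compare to the observation with MSE."""
    wet_est = np.convolve(dry_est, rir_est)[: len(reverberant)]
    return np.mean((wet_est - reverberant) ** 2)

# Toy check: with exact estimates the loss is zero.
rng = np.random.default_rng(0)
dry = rng.standard_normal(1000)                                   # "dry" source
rir = np.exp(-np.arange(200) / 50.0) * rng.standard_normal(200)   # decaying RIR
wet = np.convolve(dry, rir)[:1000]                                # observation
print(reverberation_matching_loss(dry, rir, wet))  # → 0.0
```

In the unsupervised setting described above, such a loss needs no dry ground truth: only the reverberant observation and a parametric acoustic model are required to score the network's joint estimates.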