[2603.23048] MSR-HuBERT: Self-supervised Pre-training for Adaptation to Multiple Sampling Rates
Computer Science > Sound
arXiv:2603.23048 (cs)
[Submitted on 24 Mar 2026]

Title: MSR-HuBERT: Self-supervised Pre-training for Adaptation to Multiple Sampling Rates
Authors: Zikang Huang, Meng Ge, Tianrui Wang, Xuanchen Li, Xiaobao Wang, Longbiao Wang, Jianwu Dang

Abstract: Self-supervised learning (SSL) has advanced speech processing. However, existing speech SSL methods typically assume a single sampling rate and struggle with mixed-rate data due to the temporal-resolution mismatch between rates. To address this limitation, we propose MSR-HuBERT, a multi-sampling-rate adaptive pre-training method. Building on HuBERT, we replace its single-rate downsampling CNN with a multi-sampling-rate adaptive downsampling CNN that maps raw waveforms at different sampling rates to a shared temporal resolution without resampling. This design enables unified mixed-rate pre-training and fine-tuning. In experiments spanning 16 to 48 kHz, MSR-HuBERT outperforms HuBERT on speech recognition and full-band speech reconstruction, preserving high-frequency detail while modeling low-frequency semantic structure. Moreover, MSR-HuBERT retains HuBERT's mask-prediction objective and Transformer encoder, so existing analyses and improvements developed for HuBERT apply directly.

Subjects: Sound (cs.SD); Artificial Intelligence (cs.AI)...
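The abstract does not give architectural details, so the following is a minimal PyTorch sketch of the idea it describes: per-sampling-rate convolutional stems whose total stride is proportional to the input rate, so waveforms at 16, 24, and 48 kHz all land at the ~50 Hz frame rate that HuBERT's 320x front-end produces at 16 kHz. The module name `MultiRateDownsampler`, the stride configurations, and the kernel-size choice are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

# Assumed stride configurations: the stride product scales with the
# sampling rate so every input reaches the same ~50 Hz frame rate.
# 16 kHz -> 320x (as in HuBERT), 24 kHz -> 480x, 48 kHz -> 960x.
STRIDES = {
    16_000: (5, 2, 2, 2, 2, 2, 2),   # product 320
    24_000: (5, 3, 2, 2, 2, 2, 2),   # product 480
    48_000: (5, 3, 2, 2, 2, 2, 4),   # product 960
}

class MultiRateDownsampler(nn.Module):
    """Map raw waveforms at several sampling rates to one temporal resolution."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.stems = nn.ModuleDict()
        for rate, strides in STRIDES.items():
            layers, in_ch = [], 1
            for s in strides:
                # kernel = 2 * stride is a simplification for the sketch;
                # it gives each layer some receptive-field overlap.
                layers += [
                    nn.Conv1d(in_ch, dim, kernel_size=2 * s, stride=s),
                    nn.GELU(),
                ]
                in_ch = dim
            self.stems[str(rate)] = nn.Sequential(*layers)

    def forward(self, wav: torch.Tensor, rate: int) -> torch.Tensor:
        # wav: (batch, 1, samples) at the given sampling rate
        feats = self.stems[str(rate)](wav)  # (batch, dim, frames)
        return feats.transpose(1, 2)        # (batch, frames, dim)

# One second of audio at each rate yields the same frame count,
# so a single Transformer encoder can consume all of them.
model = MultiRateDownsampler()
for rate in STRIDES:
    x = torch.randn(1, 1, rate)  # 1 s of audio
    print(rate, model(x, rate).shape[1])
```

Because every stem produces the same frame count, the shared mask-prediction Transformer can, under these assumptions, be pre-trained on mixed-rate batches without ever resampling the audio, which is the property the abstract emphasizes.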