[2509.14858] MeanFlowSE: one-step generative speech enhancement via conditional mean flow
Computer Science > Sound
arXiv:2509.14858 (cs)
[Submitted on 18 Sep 2025 (v1), last revised 4 Mar 2026 (this version, v3)]

Title: MeanFlowSE: one-step generative speech enhancement via conditional mean flow
Authors: Duojia Li, Shenghui Lu, Hongchen Pan, Zongyi Zhan, Qingyang Hong, Lin Li

Abstract: Multistep inference is a bottleneck for real-time generative speech enhancement because flow- and diffusion-based systems learn an instantaneous velocity field and therefore rely on iterative ordinary differential equation (ODE) solvers. We introduce MeanFlowSE, a conditional generative model that learns the average velocity over finite intervals along a trajectory. Using a Jacobian-vector product (JVP) to instantiate the MeanFlow identity, we derive a local training objective that directly supervises finite-interval displacement while remaining consistent with the instantaneous-field constraint on the diagonal. At inference, MeanFlowSE performs single-step generation via a backward-in-time displacement, removing the need for multistep solvers; an optional few-step variant offers additional refinement. On VoiceBank-DEMAND, the single-step model achieves strong intelligibility, fidelity, and perceptual quality with substantially lower computational cost than multistep baselines. The method requires no knowledge dist...
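To make the single-step idea concrete: if a network learns the average velocity u(z_t, r, t) = (z_t - z_r) / (t - r) along a trajectory, then one backward-in-time displacement z_r = z_t - (t - r) u(z_t, r, t) maps noise directly to the estimate, with no ODE solver. The sketch below is a minimal illustration (not the paper's implementation) using a linear interpolation path between a clean sample and Gaussian noise, where the ground-truth average velocity is available in closed form; `x`, `e`, and the 4-dimensional frames are hypothetical stand-ins for spectrogram features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a clean "feature frame" x_0 and a noise draw z_1.
x = rng.standard_normal(4)  # clean target x_0
e = rng.standard_normal(4)  # noise sample z_1 ~ N(0, I)

def z(t):
    """Linear probability-flow path z_t = (1 - t) * x + t * e."""
    return (1.0 - t) * x + t * e

def mean_velocity(z_t, r, t):
    """Ground-truth average velocity u(z_t, r, t) = (z_t - z_r) / (t - r).
    On this linear path it is constant (e - x); a trained network u_theta
    would approximate this quantity from (z_t, r, t) and conditioning."""
    return (z(t) - z(r)) / (t - r)

# One-step generation: backward-in-time displacement over [t, r] = [1, 0].
z1 = z(1.0)
x_hat = z1 - (1.0 - 0.0) * mean_velocity(z1, 0.0, 1.0)

print(np.allclose(x_hat, x))  # True: a single step recovers x exactly here
```

In the actual model the average velocity is a learned conditional network, and the paper's JVP-based objective enforces the MeanFlow identity that ties it to the instantaneous field on the diagonal r = t; this toy path only shows why one displacement suffices once that quantity is known.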