[2603.04366] Low-Resource Guidance for Controllable Latent Audio Diffusion
Computer Science > Sound

arXiv:2603.04366 (cs)

[Submitted on 4 Mar 2026]

Title: Low-Resource Guidance for Controllable Latent Audio Diffusion

Authors: Zachary Novack, Zack Zukowski, CJ Carr, Julian Parker, Zach Evans, Josiah Taylor, Taylor Berg-Kirkpatrick, Julian McAuley, Jordi Pons

Abstract: Generative audio applications demand fine-grained control over outputs, yet most existing methods either require retraining the model for each specific control or rely on inference-time controls (e.g., guidance) that can also be computationally demanding. By examining the bottlenecks of existing guidance-based controls, in particular their high cost per step due to decoder backpropagation, we introduce a guidance-based approach built on selective TFG and Latent-Control Heads (LatCHs), which enables controlling latent audio diffusion models with low computational overhead. LatCHs operate directly in latent space, avoiding the expensive decoder step, and require minimal training resources (7M parameters and ≈4 hours of training). Experiments with Stable Audio Open demonstrate effective control over intensity, pitch, and beats (and combinations of these) while maintaining generation quality. Our method balances control precision and audio fidelity at far lower computational cost than standard end-to-end guidance. Demo examples can be found at this htt...
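The key cost argument in the abstract is that standard guidance backpropagates a control loss through the decoder at every sampling step, whereas a latent-space control head lets the guidance gradient be computed on the latent alone. The sketch below illustrates this idea with a hypothetical stand-in: a linear probe playing the role of a LatCH, with an analytic guidance gradient. The real heads are small learned networks (the abstract cites 7M parameters), and this toy is not the paper's implementation, only the shape of a latent-space guidance update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a Latent-Control Head (LatCH): a linear probe
# mapping a latent vector z to a scalar control estimate (e.g. intensity).
# The paper's heads are small learned networks; a linear map keeps the
# guidance math explicit and avoids any decoder in the loop.
d = 16                       # latent dimensionality (illustrative)
W = rng.normal(size=d)       # "trained" head weights (random here)

def head(z):
    """Predict the control value directly from the latent -- no decoding."""
    return W @ z

def guidance_step(z, c_target, lr=0.4):
    """One latent-space guidance update.

    For the quadratic control loss L(z) = (head(z) - c_target)^2 the
    gradient is analytic: dL/dz = 2 * (head(z) - c_target) * W.
    The step size is normalized by ||W||^2 so the residual shrinks by a
    factor (1 - lr) per step -- a stable toy schedule, not the paper's.
    """
    grad = 2.0 * (head(z) - c_target) * W
    return z - (lr / (2.0 * (W @ W))) * grad

# Nudge a random latent until the head reports the target control value.
z = rng.normal(size=d)
target = 1.0
for _ in range(50):
    z = guidance_step(z, target)
```

After the loop `head(z)` sits at the target: the control was steered entirely in latent space, with no decoder forward or backward pass, which is the cost saving the abstract attributes to LatCHs.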