[2603.26098] A Human-Inspired Decoupled Architecture for Efficient Audio Representation Learning
Computer Science > Sound
arXiv:2603.26098 (cs) [Submitted on 27 Mar 2026]

Title: A Human-Inspired Decoupled Architecture for Efficient Audio Representation Learning
Authors: Harunori Kawano, Takeshi Sasaki

Abstract: While self-supervised learning (SSL) has revolutionized audio representation, the excessive parameterization and quadratic computational cost of standard Transformers limit their deployment on resource-constrained devices. To address this bottleneck, we propose HEAR (Human-inspired Efficient Audio Representation), a novel decoupled architecture. Inspired by the human cognitive ability to isolate local acoustic features from global context, HEAR splits the processing pipeline into two dedicated modules: an Acoustic Model for local feature extraction and a Task Model for global semantic integration. Coupled with an Acoustic Tokenizer trained via knowledge distillation, our approach enables robust Masked Audio Modeling (MAM). Extensive experiments demonstrate that HEAR requires only 15M parameters and 9.47 GFLOPs for inference, operating at a fraction of the computational cost of conventional foundation models (which typically require 85M-94M parameters). Despite this high efficiency, HEAR achieves highly competitive performance across diverse audio classification benchmarks. The code an...
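The abstract gives no implementation details, but the decoupled two-stage idea can be illustrated with a minimal sketch: a local Acoustic Model that embeds short windows of spectrogram frames independently (cost linear in sequence length), followed by a global Task Model that applies attention only over the much shorter token sequence. All module names, dimensions, and weights below are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def acoustic_model(spec, win=4, dim=32):
    """Local feature extraction: each non-overlapping window of spectrogram
    frames is embedded independently, so cost grows linearly with length.
    Weights are random placeholders; a real model would be learned."""
    n_mels, n_frames = spec.shape
    n_tok = n_frames // win
    W = rng.standard_normal((n_mels * win, dim)) / np.sqrt(n_mels * win)
    windows = spec[:, :n_tok * win].T.reshape(n_tok, win * n_mels)
    return np.tanh(windows @ W)          # (n_tok, dim) local acoustic tokens

def task_model(tokens, n_classes=10):
    """Global semantic integration: single-head self-attention over the
    shortened token sequence, then mean-pool to a clip-level prediction."""
    dim = tokens.shape[1]
    Wq = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    Wk = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    att = (tokens @ Wq) @ (tokens @ Wk).T / np.sqrt(dim)
    att = np.exp(att - att.max(axis=-1, keepdims=True))
    att /= att.sum(axis=-1, keepdims=True)
    ctx = att @ tokens                   # globally mixed token features
    Wo = rng.standard_normal((dim, n_classes)) / np.sqrt(dim)
    return ctx.mean(axis=0) @ Wo         # clip-level classification logits

spec = rng.standard_normal((64, 128))    # toy (mel bins, frames) input
tokens = acoustic_model(spec)            # quadratic attention runs on 32
logits = task_model(tokens)              # tokens instead of 128 frames
print(tokens.shape, logits.shape)        # (32, 32) (10,)
```

The efficiency argument rests on the split: quadratic attention is confined to the downsampled token sequence, while the per-frame work in the Acoustic Model stays linear in audio length.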