[2511.16849] Better audio representations are more brain-like: linking model-brain alignment with performance in downstream auditory tasks
Computer Science > Machine Learning
arXiv:2511.16849 (cs)
[Submitted on 20 Nov 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: Better audio representations are more brain-like: linking model-brain alignment with performance in downstream auditory tasks
Authors: Leonardo Pepino, Pablo Riera, Juan Kamienkowski, Luciana Ferrer

Abstract: Artificial neural networks are increasingly powerful models of brain computation, yet it remains unclear whether improving their performance in downstream tasks also makes their internal representations more similar to brain signals. To address this question in the auditory domain, we quantified the alignment between the internal representations of 36 different audio models and brain activity from two independent fMRI datasets. Using voxel-wise and component-wise regression, and representation similarity analysis, we found that recent self-supervised audio models with strong performance in diverse downstream tasks are better predictors of auditory cortex activity than previously studied models. To assess the quality of the audio representations, we evaluated these models in 6 auditory tasks from the HEAREval benchmark, spanning music, speech, and environmental sounds. This revealed strong positive Pearson correlations...
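The voxel-wise regression analysis mentioned in the abstract can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the ridge penalty, the train/test split, and all array names are assumptions made here for the example.

```python
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)

# Synthetic stand-ins: model embeddings for N stimuli, fMRI responses per voxel.
n_stimuli, n_features, n_voxels = 200, 64, 50
X = rng.standard_normal((n_stimuli, n_features))       # audio-model representations
W_true = rng.standard_normal((n_features, n_voxels))   # hypothetical ground-truth mapping
Y = X @ W_true + 0.5 * rng.standard_normal((n_stimuli, n_voxels))  # noisy voxel responses

# Hold out a quarter of the stimuli for evaluation.
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

# Ridge regression fit jointly over voxels (closed form); lambda is arbitrary here.
lam = 1.0
W = solve(X_tr.T @ X_tr + lam * np.eye(n_features), X_tr.T @ Y_tr)
Y_pred = X_te @ W

# Voxel-wise Pearson correlation between predicted and measured responses.
Yp = (Y_pred - Y_pred.mean(0)) / Y_pred.std(0)
Ym = (Y_te - Y_te.mean(0)) / Y_te.std(0)
r_per_voxel = (Yp * Ym).mean(0)
print(f"median voxel-wise r = {np.median(r_per_voxel):.2f}")
```

In practice the regularization strength would be chosen per voxel by cross-validation, and the per-voxel correlations (or explained variance) aggregated over a region of interest to yield the model-brain alignment score that is then compared against downstream task performance.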