[2602.23649] AudioCapBench: Quick Evaluation on Audio Captioning across Sound, Music, and Speech
Computer Science > Sound
arXiv:2602.23649 (cs)
[Submitted on 27 Feb 2026]

Title: AudioCapBench: Quick Evaluation on Audio Captioning across Sound, Music, and Speech

Authors: Jielin Qiu, Jianguo Zhang, Zixiang Chen, Liangwei Yang, Ming Zhu, Juntao Tan, Haolin Chen, Wenting Zhao, Rithesh Murthy, Roshan Ram, Akshara Prabhakar, Shelby Heinecke, Caiming Xiong, Silvio Savarese, Huan Wang

Abstract: We introduce AudioCapBench, a benchmark for evaluating the audio captioning capabilities of large multimodal models. AudioCapBench covers three distinct audio domains: environmental sound, music, and speech, with 1,000 curated evaluation samples drawn from established datasets. We evaluate 13 models from two providers (OpenAI, Google Gemini) using both reference-based metrics (METEOR, BLEU, ROUGE-L) and an LLM-as-Judge framework that scores predictions on three orthogonal dimensions: accuracy (semantic correctness), completeness (coverage of reference content), and hallucination (absence of fabricated content). Our results reveal that Gemini models generally outperform OpenAI models on overall captioning quality, with Gemini 3 Pro achieving the highest overall score (6.00/10), while OpenAI models exhibit lower hallucination rates. All models perform best on speech captioning and wor...
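To make the reference-based side of the evaluation concrete, here is a minimal, self-contained sketch of ROUGE-L F1, which scores a predicted caption against a reference via their longest common subsequence of tokens. The caption strings are invented examples, and the benchmark's actual implementation may differ in details such as tokenization, stemming, or multi-reference handling.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists (dynamic programming)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]


def rouge_l_f1(prediction, reference):
    """ROUGE-L F1 on whitespace-tokenized, lowercased captions."""
    p, r = prediction.lower().split(), reference.lower().split()
    lcs = lcs_len(p, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(p), lcs / len(r)
    return 2 * precision * recall / (precision + recall)


# Hypothetical caption pair for illustration:
# LCS = ["a", "dog", "loudly"], precision = 3/4, recall = 3/8, F1 = 0.5
score = rouge_l_f1("a dog barks loudly", "a dog is barking loudly in the yard")
print(f"ROUGE-L F1: {score:.3f}")
```

METEOR and BLEU would slot into the same loop over (prediction, reference) pairs; in practice one would typically use an established implementation (e.g., an NLP evaluation library) rather than hand-rolling the metrics.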