[2509.23435] AudioRole: An Audio Dataset for Character Role-Playing in Large Language Models
Computer Science > Sound
arXiv:2509.23435 (cs)
[Submitted on 27 Sep 2025 (v1), last revised 8 Apr 2026 (this version, v2)]

Title: AudioRole: An Audio Dataset for Character Role-Playing in Large Language Models
Authors: Wenyu Li, Xiaoqi Jiao, Yi Chang, Guangyan Zhang, Yiwen Guo

Abstract: The creation of high-quality multimodal datasets remains fundamental to advancing role-playing capabilities in large language models (LLMs). While existing work predominantly focuses on text-based persona simulation, Audio Role-Playing (ARP) presents unique challenges because it requires synchronized alignment of semantic content and vocal characteristics. To address this gap, we propose AudioRole, a meticulously curated dataset drawn from 13 TV series, spanning 1K+ hours and 1M+ character-grounded dialogues, and providing synchronized audio-text pairs annotated with speaker identities and contextual metadata. To demonstrate the effectiveness of the dataset, we also introduce ARP-Eval, a dual-aspect evaluation framework that assesses both response quality and role fidelity. Empirical validation shows that GLM-4-Voice trained on AudioRole (which we call the ARP-Model) achieves an average Acoustic Personalization score of 0.31, significantly outperforming the original GLM-4-Voice and the more powerful MiniCPM-O-2.6, which spec...
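The abstract describes each AudioRole entry as a synchronized audio-text pair annotated with a speaker identity and contextual metadata. As a rough illustration only (the paper's actual schema and field names are not given here, so every name below is a hypothetical assumption), such a record could be modeled like this:

```python
from dataclasses import dataclass, field

@dataclass
class ARPSample:
    """Hypothetical sketch of one character-grounded dialogue record;
    field names are illustrative, not the dataset's real schema."""
    audio_path: str            # path to the dialogue clip
    transcript: str            # text synchronized with the audio
    speaker: str               # annotated character identity
    series: str                # which of the 13 TV series the clip comes from
    context: list[str] = field(default_factory=list)  # preceding dialogue turns

# Example record (all values invented for illustration)
sample = ARPSample(
    audio_path="clips/seriesA_ep01_0042.wav",
    transcript="You shouldn't have come back here.",
    speaker="CharacterA",
    series="SeriesA",
    context=["CharacterB: I had no choice."],
)
print(sample.speaker)
```

A structure like this makes the dataset's key property concrete: each audio clip carries both its text and the identity needed to judge role fidelity, which is what a dual-aspect framework such as ARP-Eval would score against.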