[2603.27557] A General Model for Deepfake Speech Detection: Diverse Bonafide Resources or Diverse AI-Based Generators
Computer Science > Sound
arXiv:2603.27557 (cs)
[Submitted on 29 Mar 2026]
Title: A General Model for Deepfake Speech Detection: Diverse Bonafide Resources or Diverse AI-Based Generators
Authors: Lam Pham, Khoi Vu, Dat Tran, David Fischinger, Simon Freitter, Marcel Hasenbalg, Davide Antonutti, Alexander Schindler, Martin Boyer, Ian McLoughlin

Abstract: In this paper, we analyze two main factors, Bonafide Resource (BR) and AI-based Generator (AG), that affect the performance and generality of a Deepfake Speech Detection (DSD) model. To this end, we first propose a deep-learning-based model, referred to as the baseline. We then conduct experiments on the baseline showing how the BR and AG factors affect the threshold score used to classify input audio as fake or bonafide during inference. Based on these experimental results, we propose a dataset that re-uses public DSD datasets and balances BR and AG. We then train various deep-learning-based models on the proposed dataset and conduct cross-dataset evaluation on different benchmark datasets. The cross-dataset evaluation results prove that the balance of Bonafide Resources (BR) and AI-based Generators (AG)...
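The threshold-based inference step mentioned in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's actual method: the scores are made up, and the equal-error-rate rule for picking the threshold is one common convention in deepfake speech detection, chosen here only to make the decision step concrete.

```python
def eer_threshold(bonafide_scores, fake_scores):
    """Pick the score threshold where the false-accept rate
    (fakes scored as bonafide) and the false-reject rate
    (bonafide scored as fake) are closest - the equal-error-rate point.
    Higher scores are assumed to mean 'more likely bonafide'."""
    candidates = sorted(set(bonafide_scores) | set(fake_scores))
    best_t, best_gap = candidates[0], float("inf")
    for t in candidates:
        far = sum(s >= t for s in fake_scores) / len(fake_scores)
        frr = sum(s < t for s in bonafide_scores) / len(bonafide_scores)
        if abs(far - frr) < best_gap:
            best_t, best_gap = t, abs(far - frr)
    return best_t

def classify(score, threshold):
    """Label a single utterance's score using the chosen threshold."""
    return "bonafide" if score >= threshold else "fake"

# Hypothetical detector scores on a small validation set.
bona = [0.9, 0.8, 0.85, 0.7]
fake = [0.2, 0.3, 0.1, 0.4]
t = eer_threshold(bona, fake)
print(t, classify(0.95, t), classify(0.5, t))
```

The paper's point is that this threshold is not stable: which bonafide resources and which AI generators appear in training shifts the score distributions, and therefore the operating threshold, on unseen data.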