[2603.02937] Bias and Fairness in Self-Supervised Acoustic Representations for Cognitive Impairment Detection
arXiv:2603.02937 (eess) — Electrical Engineering and Systems Science > Audio and Speech Processing

[Submitted on 3 Mar 2026]

Title: Bias and Fairness in Self-Supervised Acoustic Representations for Cognitive Impairment Detection

Authors: Kashaf Gulzar, Korbinian Riedhammer, Elmar Nöth, Andreas K. Maier, Paula Andrea Pérez-Toro

Abstract: Speech-based detection of cognitive impairment (CI) offers a promising non-invasive approach for early diagnosis, yet performance disparities across demographic and clinical subgroups remain underexplored, raising concerns around fairness and generalizability. This study presents a systematic bias analysis of acoustic-based CI and depression classification using the DementiaBank Pitt Corpus. We compare traditional acoustic features (MFCCs, eGeMAPS) with contextualized speech embeddings from Wav2Vec 2.0 (W2V2), and evaluate classification performance across gender, age, and depression-status subgroups. For CI detection, higher-layer W2V2 embeddings outperform baseline features (UAR up to 80.6%), but exhibit performance disparities; specifically, females and younger participants demonstrate lower discriminative power (AUC: 0.769 and 0.746, respectively) and substantial specificit...
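The abstract reports UAR (Unweighted Average Recall) overall and AUC per demographic subgroup. As an illustration only, the sketch below shows how such subgroup-wise metrics could be computed with scikit-learn; the labels, scores, and subgroup attribute are synthetic placeholders, not the paper's data or model outputs.

```python
# Hedged sketch of subgroup fairness evaluation (synthetic data, not the
# DementiaBank Pitt Corpus or the paper's W2V2 classifier).
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score

rng = np.random.default_rng(0)

def uar(y_true, y_pred):
    # Unweighted Average Recall = macro-averaged recall over classes.
    return recall_score(y_true, y_pred, average="macro")

# Synthetic binary labels, continuous classifier scores, and a
# hypothetical demographic attribute for subgroup slicing.
n = 400
y = rng.integers(0, 2, n)
scores = y * 0.8 + rng.normal(0.0, 0.7, n)   # informative but noisy scores
group = rng.choice(["female", "male"], n)    # illustrative subgroup label
pred = (scores > 0.4).astype(int)            # fixed decision threshold

print(f"overall UAR: {uar(y, pred):.3f}")
for g in ("female", "male"):
    m = group == g
    print(f"{g}: AUC={roc_auc_score(y[m], scores[m]):.3f}, "
          f"UAR={uar(y[m], pred[m]):.3f}")
```

Comparing per-subgroup AUC/UAR against the overall values, as above, is one simple way to surface the kind of gender and age disparities the abstract describes.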