[2411.10636] Mitigating Extrinsic Gender Bias for Bangla Classification Tasks
Computer Science > Computation and Language
arXiv:2411.10636 (cs)
[Submitted on 16 Nov 2024 (v1), last revised 9 Apr 2026 (this version, v2)]

Title: Mitigating Extrinsic Gender Bias for Bangla Classification Tasks
Authors: Sajib Kumar Saha Joy, Arman Hassan Mahy, Meherin Sultana, Azizah Mamun Abha, MD Piyal Ahmmed, Yue Dong, G M Shahariar

Abstract: In this study, we investigate extrinsic gender bias in Bangla pretrained language models, a largely underexplored problem in low-resource languages. To assess this bias, we construct four manually annotated, task-specific benchmark datasets for sentiment analysis, toxicity detection, hate speech detection, and sarcasm detection. Each dataset is augmented with nuanced gender perturbations: we systematically swap gendered names and terms while preserving semantic content, enabling minimal-pair evaluation of gender-driven prediction shifts. We then propose RandSymKL, a randomized debiasing strategy that combines symmetric KL divergence with cross-entropy loss to mitigate bias across task-specific pretrained models, unifying these elements in a single training objective for extrinsic gender bias mitigation in classification tasks. Our approach was evaluated against existing bias mitigation methods, with results showing th...
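The abstract describes a training objective that pairs the usual cross-entropy task loss with a symmetric KL divergence penalty between the model's predictions on an original example and its gender-swapped counterpart. The paper's exact RandSymKL formulation (including its randomization scheme) is not given in the abstract, so the following is only a minimal sketch of that general consistency-loss pattern; the function names, probability values, and the weighting factor `lam` are illustrative assumptions, not the authors' implementation.

```python
import math

def cross_entropy(probs, label):
    # Standard task loss: negative log-likelihood of the true class.
    return -math.log(probs[label])

def sym_kl(p, q):
    # Symmetric KL divergence: KL(p || q) + KL(q || p).
    kl_pq = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    kl_qp = sum(qi * math.log(qi / pi) for pi, qi in zip(p, q))
    return kl_pq + kl_qp

def debias_loss(p_orig, p_swap, label, lam=1.0):
    # Task loss on the original example plus a consistency penalty
    # (weighted by a hypothetical factor lam) that pushes the model's
    # predictions on the gender-swapped minimal pair toward each other.
    return cross_entropy(p_orig, label) + lam * sym_kl(p_orig, p_swap)

# Toy minimal pair: the same sentence with a gendered name swapped.
p_orig = [0.7, 0.3]  # hypothetical model output on the original text
p_swap = [0.6, 0.4]  # hypothetical model output on the swapped text
print(debias_loss(p_orig, p_swap, label=0))
```

When the two predictions agree exactly, the symmetric KL term vanishes and only the cross-entropy task loss remains, so the penalty only activates when a gender swap shifts the model's output.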