[2602.24014] Interpretable Debiasing of Vision-Language Models for Social Fairness
Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.24014 (cs)

[Submitted on 27 Feb 2026]

Title: Interpretable Debiasing of Vision-Language Models for Social Fairness

Authors: Na Min An, Yoonna Jang, Yusuke Hirota, Ryo Hachiuma, Isabelle Augenstein, Hyunjung Shim

Abstract: The rapid advancement of Vision-Language Models (VLMs) has raised growing concerns that their black-box reasoning processes could lead to unintended forms of social bias. Current debiasing approaches focus on mitigating surface-level bias signals through post-hoc learning or test-time algorithms, while leaving the internal dynamics of the model largely unexplored. In this work, we introduce an interpretable, model-agnostic bias mitigation framework, DeBiasLens, that localizes social attribute neurons in VLMs through sparse autoencoders (SAEs) applied to multimodal encoders. Building upon the disentanglement ability of SAEs, we train them on facial image or caption datasets without corresponding social attribute labels to uncover neurons highly responsive to specific demographics, including those that are underrepresented. By selectively deactivating the social neurons most strongly tied to bias for each group, we effectively mitigate socially biased behaviors of VLMs without degrading their semantic knowledge. Our research lays ...
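
To make the described mechanism concrete, here is a minimal sketch, not the paper's released DeBiasLens code, of the two steps the abstract outlines: a sparse autoencoder trained on frozen encoder activations, and a reconstruction step that zeroes the SAE latents identified as social attribute neurons. The class and function names, dimensions, and the L1 sparsity penalty are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of SAE-based neuron localization and deactivation.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """SAE over frozen multimodal-encoder activations (illustrative)."""
    def __init__(self, d_model: int, d_latent: int, l1_coeff: float = 1e-3):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)
        self.l1_coeff = l1_coeff

    def forward(self, x: torch.Tensor):
        # ReLU yields a non-negative code; the L1 term below keeps it sparse,
        # which is what lets individual latents align with single attributes.
        z = torch.relu(self.encoder(x))
        x_hat = self.decoder(z)
        return x_hat, z

    def loss(self, x: torch.Tensor) -> torch.Tensor:
        x_hat, z = self(x)
        recon = (x_hat - x).pow(2).mean()
        sparsity = self.l1_coeff * z.abs().mean()
        return recon + sparsity

def debias_activations(sae: SparseAutoencoder, acts: torch.Tensor,
                       bias_latents: list[int]) -> torch.Tensor:
    """Reconstruct activations with the chosen SAE latents deactivated."""
    with torch.no_grad():
        _, z = sae(acts)
        z[:, bias_latents] = 0.0      # deactivate attribute-linked latents
        return sae.decoder(z)         # pass this back to the downstream VLM

# Usage with stand-in shapes: acts would come from a frozen encoder, and
# bias_latents would be indices found to respond strongly to one group.
if __name__ == "__main__":
    sae = SparseAutoencoder(d_model=768, d_latent=4096)
    acts = torch.randn(8, 768)
    debiased = debias_activations(sae, acts, bias_latents=[3, 17, 42])
    print(debiased.shape)  # torch.Size([8, 768])
```

In this sketch the SAE is trained only for reconstruction plus sparsity, without any social attribute labels; which latents count as "social attribute neurons" would have to be determined afterwards, for example by measuring how strongly each latent activates on images or captions of a given demographic group.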