[2511.00177] Can SAEs reveal and mitigate racial biases of LLMs in healthcare?
Computer Science > Machine Learning

arXiv:2511.00177 (cs)

[Submitted on 31 Oct 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: Can SAEs reveal and mitigate racial biases of LLMs in healthcare?
Authors: Hiba Ahsan, Byron C. Wallace

Abstract: LLMs are increasingly being used in healthcare. This promises to free physicians from drudgery, enabling better care to be delivered at scale. But the use of LLMs in this space also brings risks; for example, such models may worsen existing biases. How can we spot when LLMs are (spuriously) relying on patient race to inform predictions? In this work we assess the degree to which Sparse Autoencoders (SAEs) can reveal (and control) associations the model has made between race and stigmatizing concepts. We first identify an SAE latent in Gemma-2 models which appears to correlate with Black individuals. We find that this latent activates on reasonable input sequences (e.g., "African American") but also on problematic words like "incarceration". We then show that we can use this latent to steer models to generate outputs about Black patients, and that this can induce problematic associations in model outputs as a result. For example, activating the Black latent increases the probability the model assigns to a patient becoming "belligerent". We evaluate the degr...
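The steering intervention described in the abstract amounts to adding a scaled SAE decoder direction for the chosen latent into the model's residual stream during generation. Below is a minimal sketch of this kind of latent steering in PyTorch with a Hugging Face Gemma-2 checkpoint, assuming the decoder weights of a pretrained SAE are available as a tensor. The layer index, latent index, steering strength, and weights path are hypothetical placeholders for illustration, not values taken from the paper.

    # Minimal sketch: steer generation by adding an SAE latent's decoder
    # direction to the residual stream. LAYER, LATENT_IDX, ALPHA, and the
    # weights path are hypothetical placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "google/gemma-2-2b"   # one of the Gemma-2 models studied
    LAYER = 12                         # hypothetical residual-stream layer
    LATENT_IDX = 4321                  # hypothetical index of the "Black" latent
    ALPHA = 8.0                        # hypothetical steering strength

    tok = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model.eval()

    # Hypothetical SAE decoder weights of shape (n_latents, d_model); in
    # practice these would come from a trained SAE for this model and layer.
    W_dec = torch.load("sae_decoder.pt")            # placeholder path
    direction = W_dec[LATENT_IDX]
    direction = direction / direction.norm()        # unit-norm feature direction

    def steer_hook(module, inputs, output):
        # Decoder layers return a tuple; the hidden states come first.
        hidden = output[0]
        hidden = hidden + ALPHA * direction.to(hidden.dtype)
        return (hidden,) + output[1:]

    handle = model.model.layers[LAYER].register_forward_hook(steer_hook)
    try:
        prompt = "Patient note: The patient presented with"
        ids = tok(prompt, return_tensors="pt")
        out = model.generate(**ids, max_new_tokens=40, do_sample=False)
        print(tok.decode(out[0], skip_special_tokens=True))
    finally:
        handle.remove()

Comparing generations with the hook attached versus removed (or with ALPHA set to 0) gives a simple way to probe whether activating the latent shifts outputs toward race-associated content, in the spirit of the experiments the abstract describes.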