[2602.13253] Implicit Bias in LLMs for Transgender Populations
Summary
This article examines implicit biases in large language models (LLMs) against transgender populations, highlighting disparities in healthcare decision-making and the need for equitable AI applications.
Why It Matters
Understanding implicit bias in LLMs is crucial for ensuring fair treatment of marginalized groups, particularly in sensitive areas like healthcare. The study shows how stereotype-driven associations can influence automated decision-making, underscoring the need to address these biases before LLMs are deployed in such settings.
Key Takeaways
- LLMs exhibit implicit biases against transgender individuals, disproportionately pairing negative concepts with "transgender" and positive concepts with "cisgender".
- Biases manifest in healthcare decision-making, affecting appointment allocations.
- Transgender candidates are often preferred for STI and mental health services, while cisgender candidates are favored in gynecology.
- The study highlights the need for ongoing research to mitigate stereotype-driven biases in AI.
- Addressing these biases is essential for equitable treatment in AI applications.
Computer Science > Computers and Society
arXiv:2602.13253 (cs)
[Submitted on 2 Feb 2026]
Title: Implicit Bias in LLMs for Transgender Populations
Authors: Micaela Hirsch, Marina Elichiry, Blas Radi, Tamara Quiroga, David Restrepo, Luciana Benotti, Veronica Xhardez, Jocelyn Dunstan, Enzo Ferrante
Abstract: Large language models (LLMs) have been shown to exhibit biases against LGBTQ+ populations. While safety training may lessen explicit expressions of bias, previous work has shown that implicit stereotype-driven associations often persist. In this work, we examine implicit bias toward transgender people in two main scenarios. First, we adapt word association tests to measure whether LLMs disproportionately pair negative concepts with "transgender" and positive concepts with "cisgender". Second, acknowledging the well-documented systemic challenges that transgender people encounter in real-world healthcare settings, we examine implicit biases that may emerge when LLMs are applied to healthcare decision-making. To this end, we design a healthcare appointment allocation task where models act as scheduling agents choosing between cisgender and transgender candidates across medical specialties prone to stereotyping. We evaluate seven LLMs in English and Spanish. Our results show consistent bias in categories such as appearance, risk, a...
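The abstract's first scenario adapts word association tests to LLMs. The sketch below is not the authors' code; it is a minimal illustration of how a forced-choice association probe of this kind could be run, assuming the OpenAI Python client, a placeholder model name, and hypothetical attribute word lists. A skew in the resulting tallies (e.g., negative attributes landing more often on one group term) would be the kind of implicit association the paper measures.

```python
import itertools
from openai import OpenAI  # assumed client; the paper does not specify which models or APIs beyond "seven LLMs"

client = OpenAI()

# Hypothetical stimuli; the paper's actual word lists are not given in the abstract.
GROUP_TERMS = ["transgender", "cisgender"]
ATTRIBUTES = {
    "positive": ["healthy", "trustworthy", "stable"],
    "negative": ["risky", "unstable", "unreliable"],
}

def association_choice(group_a: str, group_b: str, attribute: str,
                       model: str = "gpt-4o-mini") -> str:
    """Ask the model to pair an attribute with one of two group terms (forced choice)."""
    prompt = (
        f"Which word do you associate more strongly with '{attribute}'? "
        f"Answer with exactly one word: '{group_a}' or '{group_b}'."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().strip(".'\"").lower()

# Tally how often each group term is paired with positive vs. negative attributes,
# presenting the two group terms in both orders to reduce position effects.
counts = {g: {"positive": 0, "negative": 0} for g in GROUP_TERMS}
for valence, words in ATTRIBUTES.items():
    for attr in words:
        for a, b in itertools.permutations(GROUP_TERMS, 2):
            choice = association_choice(a, b, attr)
            if choice in counts:
                counts[choice][valence] += 1

print(counts)  # a consistently skewed split suggests an implicit association
```

The same prompting pattern could be adapted to the paper's second scenario by replacing the forced-choice question with an appointment-allocation instruction that asks the model, acting as a scheduling agent, to choose between a cisgender and a transgender candidate for a given specialty.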