[2602.13253] Implicit Bias in LLMs for Transgender Populations

arXiv - AI · 4 min read

Summary

This article examines implicit biases in large language models (LLMs) against transgender populations, highlighting disparities in healthcare decision-making and the need for equitable AI applications.

Why It Matters

Understanding implicit bias in LLMs is crucial for ensuring fair treatment of marginalized groups, particularly in sensitive areas like healthcare. This research sheds light on how biases can affect decision-making processes, emphasizing the importance of addressing these issues to promote equity in AI applications.

Key Takeaways

  • LLMs exhibit implicit biases against transgender individuals, favoring cisgender associations.
  • Biases manifest in healthcare decision-making, affecting appointment allocations.
  • Transgender candidates are often preferred for STI and mental health services, while cisgender candidates are favored in gynecology.
  • The study highlights the need for ongoing research to mitigate stereotype-driven biases in AI.
  • Addressing these biases is essential for equitable treatment in AI applications.

Computer Science > Computers and Society · arXiv:2602.13253 (cs) · Submitted on 2 Feb 2026

Title: Implicit Bias in LLMs for Transgender Populations
Authors: Micaela Hirsch, Marina Elichiry, Blas Radi, Tamara Quiroga, David Restrepo, Luciana Benotti, Veronica Xhardez, Jocelyn Dunstan, Enzo Ferrante

Abstract: Large language models (LLMs) have been shown to exhibit biases against LGBTQ+ populations. While safety training may lessen explicit expressions of bias, previous work has shown that implicit stereotype-driven associations often persist. In this work, we examine implicit bias toward transgender people in two main scenarios. First, we adapt word association tests to measure whether LLMs disproportionately pair negative concepts with "transgender" and positive concepts with "cisgender". Second, acknowledging the well-documented systemic challenges that transgender people encounter in real-world healthcare settings, we examine implicit biases that may emerge when LLMs are applied to healthcare decision-making. To this end, we design a healthcare appointment allocation task where models act as scheduling agents choosing between cisgender and transgender candidates across medical specialties prone to stereotyping. We evaluate seven LLMs in English and Spanish. Our results show consistent bias in categories such as appearance, risk, a...

Related Articles

Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything | WIRED

The AI lab's Project Glasswing will bring together Apple, Google, and more than 45 other organizations. They'll use the new Claude Mythos...

Wired - AI · 7 min

The public needs to control AI-run infrastructure, labor, education, and governance, NOT private actors

A lot of discussion around AI is becoming siloed, and I think that is dangerous. People in AI-focused spaces often talk as if the only qu...

Reddit - Artificial Intelligence · 1 min

Agents that write their own code at runtime and vote on capabilities, no human in the loop

hollowOS just hit v4.4 and I added something that I haven’t seen anyone else do. Previous versions gave you an OS for agents: structured ...

Reddit - Artificial Intelligence · 1 min

Google Maps can now write captions for your photos using AI | TechCrunch

Gemini can now create captions when users are looking to share a photo or video.

TechCrunch - AI · 4 min

