[2602.18459] From Bias Mitigation to Bias Negotiation: Governing Identity and Sociocultural Reasoning in Generative AI

arXiv - AI · 4 min read · Article

Summary

This article discusses a proposed shift from bias mitigation to bias negotiation in generative AI: rather than only detecting and suppressing identity-related harms, governance should regulate when and how AI systems may invoke identity in their reasoning.

Why It Matters

Understanding bias negotiation is crucial for developing AI systems that not only mitigate identity-related harms but also recognize the positive role of sociocultural reasoning. This approach can enhance model functionality and promote justice in AI applications across diverse cultural contexts.

Key Takeaways

  • Bias negotiation offers a framework for regulating identity in AI systems beyond mere mitigation.
  • The study identifies key strategies for negotiating identity in AI, including probabilistic framing and harm-value balancing.
  • A positive role for sociocultural reasoning is essential for addressing structural inequities in AI applications.
  • Bias negotiation requires dynamic evaluation methods rather than static benchmarks.
  • The proposed framework aids in systematic test-suite design for assessing bias negotiation capabilities.
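The paper's call for dynamic, test-suite-based evaluation can be made concrete with a small sketch. The following is an illustrative skeleton only, not the authors' actual methodology: the probe scenarios, the `REPERTOIRE_MARKERS` keyword lists, and all function names are hypothetical assumptions about what such a suite might look like. It generates prompts that vary an identity attribute across an otherwise fixed scenario, then heuristically flags which negotiation repertoires (e.g., probabilistic framing, harm-value balancing) a model's response exhibits.

```python
from dataclasses import dataclass


@dataclass
class NegotiationProbe:
    """One test case: a scenario template with an identity slot to vary."""
    scenario: str            # template containing an {identity} placeholder
    identity_variants: list  # identity terms to substitute into the template


def build_prompts(probe: NegotiationProbe) -> list:
    """Expand a probe into one prompt per identity variant."""
    return [probe.scenario.format(identity=v) for v in probe.identity_variants]


# Hypothetical surface markers for two repertoires named in the paper.
# A real suite would need validated classifiers, not keyword matching.
REPERTOIRE_MARKERS = {
    "probabilistic_framing": ["on average", "tend to", "more likely"],
    "harm_value_balancing": ["however", "it is important", "context"],
}


def classify_repertoires(response: str) -> set:
    """Flag which repertoires a model response appears to use."""
    text = response.lower()
    return {
        name
        for name, markers in REPERTOIRE_MARKERS.items()
        if any(marker in text for marker in markers)
    }
```

In use, each prompt would be sent to the chatbot under test and the responses compared across identity variants, so the evaluation probes the model's situated judgment rather than scoring it against a static benchmark.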

Computer Science > Computers and Society · arXiv:2602.18459 (cs) · Submitted on 5 Feb 2026

Title: From Bias Mitigation to Bias Negotiation: Governing Identity and Sociocultural Reasoning in Generative AI

Authors: Zackary Okun Dunivin, Bingyi Han, John Bollenbocher

Abstract: LLMs act in the social world by drawing upon shared cultural patterns to make social situations understandable and actionable. Because identity is often part of the inferential substrate of competent judgment, ethical alignment requires regulating when and how systems invoke identity. Yet the dominant governance regime for identity-related harm remains bias mitigation, which treats identity primarily as a source of measurable disparities or harmful associations to be detected and suppressed. This leaves underspecified a positive, context-sensitive role for identity in interpretation. We call this governance problem bias negotiation: the normative regulation of identity-conditioned judgments of sociocultural relevance, inference, and justification. Empirically, we probe the feasibility of bias negotiation through semi-structured interviews with multiple publicly deployed chatbots. We identify recurring repertoires for negotiating identity including probabilistic framing of group tendencies and harm-val...
