[2509.17956] "I think this is fair": Uncovering the Complexities of Stakeholder Decision-Making in AI Fairness Assessment

arXiv - AI · 4 min read

Summary

This article explores how non-expert stakeholders assess fairness in AI decision-making, revealing complexities that extend beyond traditional expert practices.

Why It Matters

Understanding how stakeholders perceive fairness in AI is crucial for developing inclusive and effective AI governance frameworks. This research highlights the need for diverse input in fairness assessments, which can lead to more equitable AI systems and better outcomes for affected communities.

Key Takeaways

  • Stakeholders consider a broader range of features than legally protected ones when assessing fairness.
  • Non-expert stakeholders prefer tailored metrics and stricter thresholds for fairness evaluation.
  • Incorporating stakeholder perspectives can enhance AI fairness governance.
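The expert practice the paper contrasts with can be sketched as: choose a protected feature, compute a group fairness metric over outcomes, and compare it to a threshold. The snippet below is a minimal illustration of that workflow using demographic parity difference; the toy data, group labels, and the 0.05 threshold are illustrative assumptions, not values from the paper.

```python
def demographic_parity_difference(outcomes, groups):
    """Absolute gap in positive-outcome rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Toy credit-approval outcomes (1 = approved) split by a protected feature.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a"] * 5 + ["b"] * 5

gap = demographic_parity_difference(outcomes, groups)
THRESHOLD = 0.05  # a stricter threshold, as stakeholders preferred, shrinks this
print(f"parity gap = {gap:.2f}, fair = {gap <= THRESHOLD}")
```

The study's finding is that non-expert stakeholders vary each of these three choice points: which features count as protected, which metric fits the context, and how strict the threshold should be.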

Computer Science > Artificial Intelligence

arXiv:2509.17956 (cs)

[Submitted on 22 Sep 2025 (v1), last revised 26 Feb 2026 (this version, v2)]

Title: "I think this is fair": Uncovering the Complexities of Stakeholder Decision-Making in AI Fairness Assessment

Authors: Lin Luo, Yuri Nakao, Mathieu Chollet, Hiroya Inakoshi, Simone Stumpf

Abstract: Assessing fairness in artificial intelligence (AI) typically involves AI experts who select protected features, fairness metrics, and set fairness thresholds to assess outcome fairness. However, little is known about how stakeholders, particularly those affected by AI outcomes but lacking AI expertise, assess fairness. To address this gap, we conducted a qualitative study with 26 stakeholders without AI expertise, representing potential decision subjects in a credit rating scenario, to examine how they assess fairness when placed in the role of deciding on features with priority, metrics, and thresholds. We reveal that stakeholders' fairness decisions are more complex than typical AI expert practices: they considered features far beyond legally protected features, tailored metrics for specific contexts, set diverse yet stricter fairness thresholds, and even preferred designing customized fairness. Our results extend the understanding of ho...

Related Articles

As more Americans adopt AI tools, fewer say they can trust the results | TechCrunch
AI Safety

AI adoption is rising in the U.S., but trust remains low, with most Americans concerned about transparency, regulation, and the technolog...

TechCrunch - AI · 6 min ·
AI Safety

The state of AI safety in four fake graphs

Reddit - Artificial Intelligence · 1 min ·
[2603.14267] DiFlowDubber: Discrete Flow Matching for Automated Video Dubbing via Cross-Modal Alignment and Synchronization
Machine Learning

Abstract page for arXiv paper 2603.14267: DiFlowDubber: Discrete Flow Matching for Automated Video Dubbing via Cross-Modal Alignment and ...

arXiv - AI · 4 min ·
[2601.22440] AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Values from Casual Conversations
LLMs

Abstract page for arXiv paper 2601.22440: AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Value...

arXiv - AI · 4 min ·