[2509.17956] "I think this is fair": Uncovering the Complexities of Stakeholder Decision-Making in AI Fairness Assessment
Summary
This article explores how non-expert stakeholders assess fairness in AI decision-making, revealing complexities that extend beyond traditional expert practices.
Why It Matters
Understanding how stakeholders perceive fairness in AI is crucial for developing inclusive and effective AI governance frameworks. This research highlights the need for diverse input in fairness assessments, which can lead to more equitable AI systems and better outcomes for affected communities.
Key Takeaways
- Stakeholders consider a broader range of features than legally protected ones when assessing fairness.
- Non-expert stakeholders prefer tailored metrics and stricter thresholds for fairness evaluation.
- Incorporating stakeholder perspectives can enhance AI fairness governance.
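To make the "metric plus threshold" workflow concrete: the paper does not prescribe a specific metric, but a common one in expert practice is demographic parity difference, the gap in positive-outcome rates between two groups, compared against a chosen cutoff. The sketch below uses hypothetical credit-approval data and a hypothetical threshold of 0.10; none of these numbers come from the study.

```python
# Illustrative sketch (not from the paper): how an AI-expert workflow
# pairs a fairness metric with a threshold, here demographic parity
# difference in a made-up credit-rating setting.

def demographic_parity_difference(outcomes_a, outcomes_b):
    """Absolute gap in positive-outcome rates between two groups."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Hypothetical approval outcomes (1 = approved) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% approved

gap = demographic_parity_difference(group_a, group_b)
threshold = 0.10  # a hypothetical strict threshold, as stakeholders preferred

print(f"demographic parity difference: {gap:.2f}")
print("within threshold" if gap <= threshold else "exceeds threshold")
```

A stricter threshold, as the stakeholders in the study favored, flags more outcome gaps as unfair; here the 0.30 gap exceeds the 0.10 cutoff.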
Computer Science > Artificial Intelligence
arXiv:2509.17956 (cs)
[Submitted on 22 Sep 2025 (v1), last revised 26 Feb 2026 (this version, v2)]
Title: "I think this is fair": Uncovering the Complexities of Stakeholder Decision-Making in AI Fairness Assessment
Authors: Lin Luo, Yuri Nakao, Mathieu Chollet, Hiroya Inakoshi, Simone Stumpf
Abstract: Assessing fairness in artificial intelligence (AI) typically involves AI experts who select protected features, fairness metrics, and fairness thresholds to assess outcome fairness. However, little is known about how stakeholders, particularly those affected by AI outcomes but lacking AI expertise, assess fairness. To address this gap, we conducted a qualitative study with 26 stakeholders without AI expertise, representing potential decision subjects in a credit rating scenario, to examine how they assess fairness when placed in the role of deciding on features with priority, metrics, and thresholds. We reveal that stakeholders' fairness decisions are more complex than typical AI expert practices: they considered features far beyond legally protected features, tailored metrics for specific contexts, set diverse yet stricter fairness thresholds, and even preferred designing customized fairness. Our results extend the understanding of ho...