[2404.01877] Procedural Fairness in Machine Learning
Summary
This paper explores procedural fairness in machine learning, proposing a new metric for evaluation and methods to enhance fairness without significantly sacrificing model performance.
Why It Matters
As machine learning systems increasingly shape decision-making, understanding and improving procedural fairness is crucial for ethical AI deployment. This research addresses a gap in the existing literature by focusing on procedural fairness, which complements distributive fairness and helps ensure that AI systems are not only effective but also equitable.
Key Takeaways
- Defines procedural fairness in ML, drawing from philosophy and psychology.
- Introduces a new metric, GPF_FAE, to evaluate group procedural fairness.
- Demonstrates the relationship between procedural and distributive fairness.
- Proposes methods to identify the features responsible for procedural unfairness and to improve the model's procedural fairness.
- Shows that improvements in procedural fairness can also enhance distributive fairness.
Computer Science > Machine Learning — arXiv:2404.01877 (cs)
Submitted on 2 Apr 2024 (v1), last revised 26 Feb 2026 (this version, v2)
Title: Procedural Fairness in Machine Learning
Authors: Ziming Wang, Changwu Huang, Ke Tang, Xin Yao
Abstract: Fairness in machine learning (ML) has garnered significant attention. However, current research has mainly concentrated on the distributive fairness of ML models, with limited focus on another dimension of fairness, i.e., procedural fairness. In this paper, we first define the procedural fairness of ML models by drawing from the established understanding of procedural fairness in the philosophy and psychology fields, and then give formal definitions of individual and group procedural fairness. Based on the proposed definition, we further propose a novel metric to evaluate the group procedural fairness of ML models, called $GPF_{FAE}$, which utilizes a widely used explainable artificial intelligence technique, namely feature attribution explanation (FAE), to capture the decision process of ML models. We validate the effectiveness of $GPF_{FAE}$ on a synthetic dataset and eight real-world datasets. Our experimental studies have revealed the relationship between procedural and distributive fairness of ML models. After validating the proposed metric for assessing the procedural fairness of ML models, we then p...