[2404.01877] Procedural Fairness in Machine Learning


Summary

This paper explores procedural fairness in machine learning, proposing a new metric for evaluation and methods to enhance fairness without significantly sacrificing model performance.

Why It Matters

As machine learning systems increasingly shape consequential decisions, understanding and improving procedural fairness is crucial for ethical AI deployment. This research addresses a gap in the existing literature by focusing on procedural fairness, which complements distributive fairness and helps ensure that AI systems are not only effective but also equitable.

Key Takeaways

  • Defines procedural fairness in ML, drawing from philosophy and psychology.
  • Introduces a new metric, GPF_FAE, to evaluate group procedural fairness.
  • Demonstrates the relationship between procedural and distributive fairness.
  • Proposes methods to identify and improve features leading to procedural unfairness.
  • Shows that improvements in procedural fairness can also enhance distributive fairness.

Computer Science > Machine Learning
arXiv:2404.01877 (cs)
[Submitted on 2 Apr 2024 (v1), last revised 26 Feb 2026 (this version, v2)]

Title: Procedural Fairness in Machine Learning
Authors: Ziming Wang, Changwu Huang, Ke Tang, Xin Yao

Abstract: Fairness in machine learning (ML) has garnered significant attention. However, current research has mainly concentrated on the distributive fairness of ML models, with limited focus on another dimension of fairness, i.e., procedural fairness. In this paper, we first define the procedural fairness of ML models by drawing from the established understanding of procedural fairness in the fields of philosophy and psychology, and then give formal definitions of individual and group procedural fairness. Based on the proposed definition, we further propose a novel metric to evaluate the group procedural fairness of ML models, called $GPF_{FAE}$, which utilizes a widely used explainable artificial intelligence technique, namely feature attribution explanation (FAE), to capture the decision process of ML models. We validate the effectiveness of $GPF_{FAE}$ on a synthetic dataset and eight real-world datasets. Our experimental studies have revealed the relationship between procedural and distributive fairness of ML models. After validating the proposed metric for assessing the procedural fairness of ML models, we then p...
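The abstract's core idea — evaluating group procedural fairness by comparing how a model's decision process, as captured by feature attribution explanations, differs between groups — can be sketched informally. The code below is an illustrative stand-in, not the paper's $GPF_{FAE}$ definition: it assumes a linear model whose attributions are simply weight times feature value, and it maps the L2 distance between group-mean attribution profiles to an ad hoc similarity score. The group split, weights, and similarity formula are all hypothetical choices for the example.

```python
# Illustrative sketch of comparing feature-attribution profiles across two
# groups. The similarity score here is a stand-in, NOT the paper's GPF_FAE.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples, 4 features; group membership derived from
# the sign of feature 0 (a hypothetical sensitive-attribute proxy).
X = rng.normal(size=(200, 4))
group = (X[:, 0] > 0).astype(int)

# A fixed linear "model" stands in for a trained classifier.
w = np.array([1.5, -0.5, 0.8, 0.1])

def feature_attributions(X, w):
    """Basic FAE for a linear model: each feature's contribution to the
    score is weight * feature value."""
    return X * w

attr = feature_attributions(X, w)

# Group-mean attribution profiles: how, on average, each feature drives
# the model's decisions within each group.
mean_a = attr[group == 0].mean(axis=0)
mean_b = attr[group == 1].mean(axis=0)

# Distance between profiles mapped to (0, 1]; 1 means the two groups'
# decision processes look identical under this attribution method.
dist = np.linalg.norm(mean_a - mean_b)
similarity = 1.0 / (1.0 + dist)
print(round(similarity, 3))
```

A lower similarity would flag that the model relies on features differently across groups, which is the kind of signal a procedural-fairness metric built on FAE is meant to surface.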
