[2505.11111] FairSHAP: Preprocessing for Fairness Through Attribution-Based Data Augmentation
Summary
FairSHAP introduces a novel preprocessing framework that uses Shapley value attribution to improve both individual and group fairness in machine learning models while maintaining model accuracy.
Why It Matters
Fairness in machine learning is crucial, especially in high-stakes applications. FairSHAP addresses a key limitation of existing preprocessing methods, their lack of a transparent mechanism for identifying the sources of unfairness, by pinpointing and modifying fairness-critical instances, thereby improving demographic parity and equality of opportunity.
Key Takeaways
- FairSHAP leverages Shapley value attribution for fairness enhancement.
- It identifies fairness-critical instances in the training data transparently (see the sketch after this list).
- The method reduces discriminative risk while preserving model accuracy.
- FairSHAP achieves significant fairness gains with minimal data perturbation.
- It integrates seamlessly into existing machine learning pipelines.
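To make the pipeline concrete, here is a minimal, hypothetical sketch of attribution-guided repair in the spirit of FairSHAP: compute Shapley values for a trained model, flag instances with the largest attribution magnitudes as fairness-critical, and copy the top-attribution feature value from a nearest neighbor in the other sensitive group. This is not the authors' implementation; the flagging rule, the matching scheme, and the single-feature edit are simplifying assumptions, and the function name `attribution_guided_repair` is illustrative.

```python
# Hypothetical sketch only -- NOT the authors' code. Assumes a binary
# sensitive attribute, numpy-array inputs, and a tree model so that
# shap.TreeExplainer applies.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

def attribution_guided_repair(X, y, sensitive, frac=0.05):
    """Edit a small fraction of instances so their most influential
    feature matches a nearest neighbor from the other sensitive group."""
    X = X.copy()
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Shapley attributions; keep the positive-class slice whether this
    # SHAP version returns a per-class list or a 3-D array.
    sv = shap.TreeExplainer(model).shap_values(X)
    sv = sv[1] if isinstance(sv, list) else (sv[..., 1] if sv.ndim == 3 else sv)

    # Flag instances with the largest total attribution magnitude --
    # a stand-in for the paper's fairness-criticality measure.
    scores = np.abs(sv).sum(axis=1)
    flagged = np.argsort(scores)[-max(1, int(frac * len(X))):]

    # Match each flagged instance to its nearest neighbor in the *other*
    # sensitive group, then copy over the top-attribution feature value.
    for i in flagged:
        other = np.flatnonzero(sensitive != sensitive[i])
        nn = NearestNeighbors(n_neighbors=1).fit(X[other])
        j = other[nn.kneighbors(X[i:i + 1], return_distance=False)[0, 0]]
        top_feat = np.argmax(np.abs(sv[i]))
        X[i, top_feat] = X[j, top_feat]
    return X
```

Because only a small fraction of instances is touched, and each edit pulls a value from a real instance in the other group, this kind of repair perturbs the data minimally while preserving its integrity, consistent with the claims above.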
Computer Science > Machine Learning
arXiv:2505.11111 (cs)
[Submitted on 16 May 2025 (v1), last revised 21 Feb 2026 (this version, v3)]
Title: FairSHAP: Preprocessing for Fairness Through Attribution-Based Data Augmentation
Authors: Lin Zhu, Yijun Bian, Lei You
Abstract: Ensuring fairness in machine learning models is critical, particularly in high-stakes domains where biased decisions can lead to serious societal consequences. Existing preprocessing approaches generally lack transparent mechanisms for identifying which features or instances are responsible for unfairness, which obscures the rationale behind data modifications. We introduce FairSHAP, a novel preprocessing framework that leverages Shapley value attribution to improve both individual and group fairness. FairSHAP identifies fairness-critical instances in the training data using an interpretable measure of feature importance, and systematically modifies them through instance-level matching across sensitive groups. This process reduces discriminative risk (an individual fairness metric) while preserving data integrity and model accuracy. We demonstrate that FairSHAP significantly improves demographic parity and equality of opportunity across diverse tabular datasets, achieving fairness gains with minimal data perturbation and, in some c...
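The two group-fairness criteria named in the abstract have standard definitions: demographic parity compares positive-prediction rates across sensitive groups, and equality of opportunity compares true positive rates. The sketch below computes both gaps from binary predictions; these are the standard textbook formulas, not code from the paper.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)| for binary predictions and
    a binary sensitive attribute A."""
    a0, a1 = (sensitive == 0), (sensitive == 1)
    return abs(y_pred[a0].mean() - y_pred[a1].mean())

def equal_opportunity_gap(y_true, y_pred, sensitive):
    """|P(Yhat=1 | A=0, Y=1) - P(Yhat=1 | A=1, Y=1)|: the gap in
    true positive rates between the two groups."""
    pos0 = (sensitive == 0) & (y_true == 1)
    pos1 = (sensitive == 1) & (y_true == 1)
    return abs(y_pred[pos0].mean() - y_pred[pos1].mean())
```

Evaluating these gaps before and after a preprocessing step such as FairSHAP, alongside accuracy, is the usual way to quantify the fairness-accuracy trade-off the paper reports on.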