[2505.12530] Enforcing Fair Predicted Scores on Intervals of Percentiles by Difference-of-Convex Constraints
Computer Science > Machine Learning
arXiv:2505.12530 (cs)
[Submitted on 18 May 2025 (v1), last revised 5 Apr 2026 (this version, v2)]

Title: Enforcing Fair Predicted Scores on Intervals of Percentiles by Difference-of-Convex Constraints
Authors: Yutian He, Yankun Huang, Yao Yao, Qihang Lin

Abstract: Fairness in machine learning has become a critical concern. Existing approaches often focus on achieving full fairness across all score ranges generated by predictive models, ensuring fairness in both high- and low-percentile populations. However, this stringent requirement can compromise predictive performance and may not align with the practical fairness concerns of stakeholders. In this work, we propose a novel framework for building partially fair machine learning models that enforce fairness only within a specific percentile interval of interest while maintaining flexibility in other regions. We introduce statistical metrics to evaluate partial fairness within a given percentile interval. To achieve partial fairness, we propose an in-processing method by formulating the model training problem as constrained optimization with difference-of-convex constraints, which can be solved by an inexact difference-of-convex algorithm (IDCA). We provide the complexity analysis of IDCA for finding a nea...
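The abstract defines partial fairness only over a percentile interval of the predicted scores. The paper's actual metrics are not given here, but one natural illustrative measure (an assumption, not the authors' definition) is the largest gap between the two groups' quantile functions restricted to the interval of interest:

```python
import numpy as np

def partial_fairness_gap(scores_a, scores_b, p_lo, p_hi, grid=50):
    """Illustrative partial-fairness gap between two groups' predicted scores.

    Compares the groups' empirical quantile functions only on the
    percentile interval [p_lo, p_hi], ignoring all other percentiles.
    This is a hypothetical metric in the spirit of the abstract, not
    the statistic defined in the paper.
    """
    ps = np.linspace(p_lo, p_hi, grid)          # percentiles inside the interval
    qa = np.quantile(scores_a, ps)              # group-A quantile function
    qb = np.quantile(scores_b, ps)              # group-B quantile function
    return float(np.max(np.abs(qa - qb)))       # worst-case gap on the interval

# Identical score distributions have zero gap on any interval:
# partial_fairness_gap(np.arange(100), np.arange(100), 0.8, 1.0) == 0.0
```

A metric of this form captures the key idea: two groups may differ arbitrarily at low percentiles while still being judged fair if their score distributions agree on, say, the top quintile.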
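The training method relies on difference-of-convex (DC) structure: each nonconvex function is written as g(x) - h(x) with g, h convex, and each iteration linearizes h at the current point and solves the resulting convex subproblem. The paper's IDCA handles DC *constraints* inexactly; the toy below only sketches the basic DCA iteration on an unconstrained one-dimensional objective (f(x) = x⁴ - x², chosen so the subproblem has a closed form), to show the linearize-then-solve pattern:

```python
import numpy as np

def dca_toy(x0, iters=50):
    """Basic DCA iteration on f(x) = g(x) - h(x), g(x) = x**4, h(x) = x**2.

    A minimal sketch of the DC-algorithm template, not the paper's IDCA:
    at each step, replace the concave part -h by its linearization at x_k
    and minimize the resulting convex surrogate exactly.
    """
    x = x0
    for _ in range(iters):
        # Linearize h at x: h(y) ~ h(x) + 2*x*(y - x), so the surrogate is
        #   y**4 - 2*x*y   (up to constants), a convex problem in y.
        # First-order condition 4*y**3 = 2*x gives the closed-form update:
        x = np.cbrt(x / 2.0)
    return float(x)

# From x0 = 1.0 the iterates converge to 1/sqrt(2), a stationary point of
# x**4 - x**2 (where 4x**3 - 2x = 0).
```

Each surrogate upper-bounds f and is tight at x_k, so the objective value is monotonically nonincreasing along the iterates; the constrained IDCA of the paper applies the same linearization idea to the DC fairness constraints, solving each convex subproblem only inexactly.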