[2603.24392] Federated fairness-aware classification under differential privacy
Statistics > Machine Learning
arXiv:2603.24392 (stat)
[Submitted on 25 Mar 2026]

Title: Federated fairness-aware classification under differential privacy
Authors: Gengyu Xue, Yi Yu

Abstract: Privacy and algorithmic fairness have become two central issues in modern machine learning. Although each has separately emerged as a rapidly growing research area, their joint effect remains comparatively under-explored. In this paper, we systematically study the joint impact of differential privacy and fairness on classification in a federated setting, where data are distributed across multiple servers. Targeting demographic disparity constrained classification under federated differential privacy, we propose a two-step algorithm, namely FDP-Fair. In the special case where there is only one server, we further propose a simple yet powerful algorithm, namely CDP-Fair, serving as a computationally-lightweight alternative. Under mild structural assumptions, theoretical guarantees on privacy, fairness and excess risk control are established. In particular, we disentangle the source of the private fairness-aware excess risk into a) intrinsic cost of classification, b) cost of private classification, c) non-private cost of fairness and d) private cost of fairness. Our theoretical findings are complemented by extensive numerical experiments on both synth...
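The paper's FDP-Fair and CDP-Fair algorithms are not reproduced here, but the two quantities the abstract combines (a demographic-disparity measure and a differentially private release of it) are standard. The following is an illustrative sketch only, under assumed definitions, and all function names are hypothetical rather than taken from the paper: it computes the demographic-parity gap between two groups and privatizes the estimate with Laplace noise calibrated to a conservative sensitivity bound.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate0 - rate1)

def dp_parity_gap(y_pred, group, epsilon, rng=None):
    """epsilon-DP estimate of the gap via the Laplace mechanism (illustrative).

    Changing one record alters at most one group's rate by at most 1/n_g,
    so 1/n0 + 1/n1 is a conservative upper bound on the gap's sensitivity.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    group = np.asarray(group)
    n0 = (group == 0).sum()
    n1 = (group == 1).sum()
    sensitivity = 1.0 / n0 + 1.0 / n1
    gap = demographic_parity_gap(y_pred, group)
    return gap + rng.laplace(scale=sensitivity / epsilon)
```

In a federated setting each server would privatize its local statistics before aggregation; this single-dataset sketch only shows the central-DP building block, closer in spirit to the single-server CDP-Fair case.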