[2601.13698] Does Privacy Always Harm Fairness? Data-Dependent Trade-offs via Chernoff Information Neural Estimation
Computer Science > Machine Learning

arXiv:2601.13698 (cs)

[Submitted on 20 Jan 2026 (v1), last revised 24 Mar 2026 (this version, v2)]

Title: Does Privacy Always Harm Fairness? Data-Dependent Trade-offs via Chernoff Information Neural Estimation

Authors: Arjun Nichani, Hsiang Hsu, Chun-Fu (Richard) Chen, Haewon Jeong

Abstract: Fairness and privacy are two vital pillars of trustworthy machine learning. Despite extensive research on each topic individually, their relationship has received significantly less attention. In this paper, we use an information-theoretic measure, Chernoff Information, to characterize the fundamental trade-off between fairness, privacy, and accuracy induced by the input data distribution. We first propose Chernoff Difference, a notion of data fairness, along with its noisy variant, Noisy Chernoff Difference, which allows us to analyze fairness and privacy simultaneously. Through simple Gaussian examples, we show that Noisy Chernoff Difference exhibits three qualitatively distinct behaviors depending on the underlying data distribution. To extend this analysis beyond synthetic settings, we develop the Chernoff Information Neural Estimator (CINE), the first neural network-based estimator of Chernoff Information for unknown distributions. We apply...
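The abstract does not give the definition of Chernoff Information, but the standard one is C(P, Q) = max over λ in (0, 1) of -log ∫ p(x)^λ q(x)^(1-λ) dx. As a minimal illustration (not the paper's CINE estimator, which is neural-network-based), the sketch below evaluates this quantity by grid search over λ with simple Riemann-sum quadrature, for two one-dimensional Gaussians where the closed-form value is (μ0 - μ1)² / (8σ²) when variances are equal:

```python
import numpy as np

def chernoff_information(p, q, x, num_lambdas=99):
    """Estimate C(P,Q) = max_lam -log ∫ p^lam q^(1-lam) dx
    by grid search over lam and Riemann-sum quadrature on grid x."""
    dx = x[1] - x[0]
    best = -np.inf
    for lam in np.linspace(0.01, 0.99, num_lambdas):
        coeff = np.sum(p**lam * q**(1.0 - lam)) * dx  # Chernoff coefficient
        best = max(best, -np.log(coeff))
    return best

def gaussian_pdf(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Two equal-variance Gaussians: closed form gives (mu0-mu1)^2 / (8 sigma^2) = 0.5
x = np.linspace(-10.0, 10.0, 4001)
mu0, mu1, sigma = 0.0, 2.0, 1.0
c = chernoff_information(gaussian_pdf(x, mu0, sigma), gaussian_pdf(x, mu1, sigma), x)
print(round(c, 4))  # close to 0.5; optimum is at lam = 1/2 for equal variances
```

For equal-variance Gaussians the optimizing λ is exactly 1/2, in which case Chernoff Information coincides with the Bhattacharyya distance; for unequal variances the optimal λ shifts, which is one reason a data-driven estimator like CINE is needed for unknown distributions.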