[2602.16436] Learning with Locally Private Examples by Inverse Weierstrass Private Stochastic Gradient Descent
Summary
This paper presents a method for correcting the bias that local-privacy noise introduces into binary classification. It characterizes this bias via the Weierstrass transform, inverts the transform to obtain unbiased estimates, and builds a new stochastic gradient descent algorithm on top of this correction.
Why It Matters
As data privacy concerns grow, developing methods that allow for the use of locally private data without introducing bias is crucial. This research offers a significant advancement in the field of machine learning, particularly in ensuring accurate predictions while maintaining user privacy.
Key Takeaways
- Introduces Inverse Weierstrass Private SGD (IWP-SGD) for bias correction.
- Proves that the new algorithm converges to the true population risk minimizer at a rate of O(1/n).
- Demonstrates empirical validation on both synthetic and real-world datasets.
- Addresses the challenge of data reusability under Local Differential Privacy.
- Highlights the importance of unbiased estimates in machine learning.
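To make the bias-correction idea concrete, here is a minimal numerical sketch (not the paper's algorithm; the function, noise level, and sample count are illustrative choices of ours). Averaging a nonlinear function f over Gaussian-perturbed inputs computes the Weierstrass transform of f rather than f itself; evaluating the inverse-transformed function on the noisy inputs removes that bias. For f(x) = x² and noise variance σ², the inverse Weierstrass transform is simply x² − σ²:

```python
import random

random.seed(0)

sigma = 1.0      # std. dev. of the Gaussian privacy noise (illustrative)
x_true = 2.0     # the private value held by each user
n = 200_000      # number of noisy releases we average over

f = lambda x: x ** 2                      # nonlinear function of interest
f_inv_w = lambda x: x ** 2 - sigma ** 2   # its inverse Weierstrass transform

# Noisy LDP-style releases: the analyst only sees x_true + Gaussian noise.
noisy = [x_true + random.gauss(0.0, sigma) for _ in range(n)]

biased = sum(f(z) for z in noisy) / n           # estimates x^2 + sigma^2
corrected = sum(f_inv_w(z) for z in noisy) / n  # unbiased estimate of x^2

print(f"true f(x)  = {f(x_true):.3f}")  # 4.000
print(f"biased     = {biased:.3f}")     # ~5.0, off by sigma^2
print(f"corrected  = {corrected:.3f}")  # ~4.0
```

The naive average overshoots by exactly σ², while the corrected average concentrates around the true value; the same convolution-inversion principle underlies the paper's unbiased estimates of nonlinear functions under LDP.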
Computer Science > Machine Learning
arXiv:2602.16436 (cs)
[Submitted on 18 Feb 2026]

Title: Learning with Locally Private Examples by Inverse Weierstrass Private Stochastic Gradient Descent
Authors: Jean Dufraiche, Paul Mangold, Michaël Perrot, Marc Tommasi

Abstract: Releasing data once and for all under noninteractive Local Differential Privacy (LDP) enables complete data reusability, but the resulting noise may create bias in subsequent analyses. In this work, we leverage the Weierstrass transform to characterize this bias in binary classification. We prove that inverting this transform leads to a bias-correction method to compute unbiased estimates of nonlinear functions on examples released under LDP. We then build a novel stochastic gradient descent algorithm called Inverse Weierstrass Private SGD (IWP-SGD). It converges to the true population risk minimizer at a rate of $\mathcal{O}(1/n)$, with $n$ the number of examples. We empirically validate IWP-SGD on binary classification tasks using synthetic and real-world datasets.

Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR); Machine Learning (stat.ML)
Cite as: arXiv:2602.16436 [cs.LG] (or arXiv:2602.16436v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.16436
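The abstract describes running SGD on examples released once under LDP, with a correction that makes the stochastic gradients unbiased. The sketch below is our own hedged illustration of that structure, not the paper's IWP-SGD: we assume Gaussian release noise on the features of a linear model with squared loss, a special case where the inverse-Weierstrass correction is exact and reduces to subtracting 2σ²w from the naive gradient. All names and constants here are ours.

```python
import random

random.seed(1)

sigma = 1.0   # std. dev. of the LDP release noise on features (assumed)
w_star = 1.5  # ground-truth parameter, used only to generate data
n = 5_000

# Private examples: the learner only ever sees the noisy feature x_tilde.
data = []
for _ in range(n):
    x = random.uniform(-2.0, 2.0)
    y = w_star * x
    data.append((x + random.gauss(0.0, sigma), y))

def run_sgd(corrected: bool, epochs: int = 3, lr: float = 0.01) -> float:
    """SGD on the released data, optionally with the debiased gradient."""
    w, w_avg, steps = 0.0, 0.0, 0
    for _ in range(epochs):
        for x_tilde, y in data:
            grad = 2.0 * (w * x_tilde - y) * x_tilde  # naive (biased) gradient
            if corrected:
                grad -= 2.0 * sigma ** 2 * w          # debiasing correction
            w -= lr * grad
            steps += 1
            w_avg += (w - w_avg) / steps              # Polyak averaging
    return w_avg

naive = run_sgd(corrected=False)
debiased = run_sgd(corrected=True)
print(f"naive SGD     -> w = {naive:.3f}")    # attenuated toward 0
print(f"corrected SGD -> w = {debiased:.3f}") # close to w_star
```

The uncorrected run converges to a shrunken estimate (the classic errors-in-variables attenuation), while the corrected run recovers a parameter close to w_star, illustrating why unbiased gradients matter when the data is noised once and reused.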