[2506.12553] Beyond Laplace and Gaussian: Exploring the Generalized Gaussian Mechanism for Private Machine Learning
Computer Science > Machine Learning
arXiv:2506.12553 (cs)
[Submitted on 14 Jun 2025 (v1), last revised 2 Apr 2026 (this version, v2)]

Title: Beyond Laplace and Gaussian: Exploring the Generalized Gaussian Mechanism for Private Machine Learning
Authors: Roy Rinberg, Ilia Shumailov, Vikrant Singhal, Rachel Cummings, Nicolas Papernot

Abstract: Differential privacy (DP) is obtained by randomizing a data-analysis algorithm, which necessarily introduces a tradeoff between utility and privacy. Many DP mechanisms are built upon one of two underlying tools: the Laplace and Gaussian additive-noise mechanisms. We expand the search space of algorithms by investigating the Generalized Gaussian (GG) mechanism, which samples the additive noise term $x$ with probability proportional to $e^{-(|x|/\sigma)^{\beta}}$ for some $\beta \geq 1$ (denoted $GG_{\beta, \sigma}(f, D)$). The Laplace and Gaussian mechanisms are special cases of GG for $\beta = 1$ and $\beta = 2$, respectively. We prove that the full GG family satisfies differential privacy and extend the PRV accountant to support privacy-loss computation for these mechanisms. We then instantiate the GG mechanism in two canonical private learning pipelines, PATE and DP-SGD. Empirically, we explore PATE and DP-SGD with the GG mechanism across...
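The GG noise distribution described in the abstract can be sampled with a standard gamma transform: if $G \sim \mathrm{Gamma}(1/\beta, 1)$ and $S$ is a uniform random sign, then $S \cdot \sigma \cdot G^{1/\beta}$ has density proportional to $e^{-(|x|/\sigma)^{\beta}}$. The sketch below is illustrative only; the function names are not from the paper, and it covers only a scalar query (the paper's actual mechanisms sit inside PATE and DP-SGD with calibrated $\sigma$).

```python
import numpy as np

def gg_noise(beta, sigma, size=None, rng=None):
    """Sample noise with density proportional to exp(-(|x|/sigma)**beta).

    Uses the gamma transform: if G ~ Gamma(1/beta, 1) and S = +/-1
    uniformly at random, then S * sigma * G**(1/beta) has the GG density.
    """
    rng = np.random.default_rng(rng)
    g = rng.gamma(1.0 / beta, 1.0, size=size)
    s = rng.choice([-1.0, 1.0], size=size)
    return s * sigma * g ** (1.0 / beta)

def gg_mechanism(true_answer, beta, sigma, rng=None):
    """Additive GG mechanism for a (hypothetical) scalar query answer.

    beta = 1 recovers Laplace noise with scale sigma;
    beta = 2 recovers a Gaussian with variance sigma**2 / 2.
    """
    return true_answer + gg_noise(beta, sigma, rng=rng)
```

Note that at $\beta = 2$ the parameterization differs from the usual $N(0, \sigma^2)$ by a factor of $\sqrt{2}$ in scale, since the exponent is $-(x/\sigma)^2$ rather than $-x^2/(2\sigma^2)$.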