[2502.03652] Improving the Convergence of Private Shuffled Gradient Methods with Public Data

arXiv - Machine Learning · 4 min read

Summary

This article summarizes a paper that improves the convergence of private shuffled gradient methods by integrating public data into private optimization, addressing the weaker convergence guarantees these methods have relative to DP-SGD.

Why It Matters

The research addresses the gap between the theory and practice of differentially private machine learning: practical systems typically shuffle training data rather than sample with replacement, yet shuffled methods lacked rigorous privacy-accuracy guarantees. By combining private and public data, the proposed method offers a way to recover accuracy while maintaining privacy, which is crucial in sensitive-data applications.

Key Takeaways

  • Introduces Interleaved-ShuffleG, a hybrid method that alternates private and public optimization steps (a toy sketch follows this list).
  • Shows that data shuffling leads to worse empirical excess risk for DP-ShuffleG than with-replacement sampling does for DP-SGD.
  • Proves the first empirical excess risk bound for DP-ShuffleG, using privacy amplification by iteration (PABI) and a novel application of Stein's lemma.
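
The abstract describes the alternation only at a high level, so the following is a minimal, hypothetical sketch of the interleaving idea: each private step takes a clipped, Gaussian-noised gradient on the shuffled private data (DP-SGD-style), and each is followed by a noiseless step on a public sample. The function names, the strict 1:1 alternation schedule, and the noise scale noise_sigma * clip_C are illustrative assumptions, not the paper's exact algorithm or privacy calibration.

```python
import numpy as np

def clip(g, C):
    # Standard DP gradient clipping: rescale g to L2 norm at most C.
    norm = np.linalg.norm(g)
    return g if norm <= C else g * (C / norm)

def interleaved_shuffleg(w, private_data, public_data, grad_fn,
                         epochs=5, lr=0.1, clip_C=1.0, noise_sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        # Shuffle the private data once per epoch, then traverse it sequentially.
        for i in rng.permutation(len(private_data)):
            # Private step: clipped gradient plus Gaussian noise.
            g = clip(grad_fn(w, private_data[i]), clip_C)
            g = g + rng.normal(0.0, noise_sigma * clip_C, size=g.shape)
            w = w - lr * g
            # Public step: exact, noiseless gradient on a random public sample.
            j = rng.integers(len(public_data))
            w = w - lr * grad_fn(w, public_data[j])
    return w

# Toy usage: least squares on synthetic (x, y) pairs.
grad = lambda w, xy: 2.0 * (np.dot(xy[0], w) - xy[1]) * xy[0]
priv = [(np.array([x, 1.0]), 3.0 * x + 1.0) for x in np.linspace(0, 1, 20)]
pub = priv[:5]
print(interleaved_shuffleg(np.zeros(2), priv, pub, grad))
```

A real implementation would calibrate the noise and the private/public schedule via the paper's PABI-based analysis to obtain formal DP guarantees; the sketch above only shows the control flow.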

Computer Science > Machine Learning

arXiv:2502.03652 (cs) [Submitted on 5 Feb 2025 (v1), last revised 24 Feb 2026 (this version, v2)]

Title: Improving the Convergence of Private Shuffled Gradient Methods with Public Data
Authors: Shuli Jiang, Pranay Sharma, Zhiwei Steven Wu, Gauri Joshi

Abstract: We consider the problem of differentially private (DP) convex empirical risk minimization (ERM). While the standard DP-SGD algorithm is theoretically well-established, practical implementations often rely on shuffled gradient methods that traverse the training data sequentially rather than sampling with replacement in each iteration. Despite their widespread use, the theoretical privacy-accuracy trade-offs of private shuffled gradient methods (DP-ShuffleG) remain poorly understood, leading to a gap between theory and practice. In this work, we leverage privacy amplification by iteration (PABI) and a novel application of Stein's lemma to provide the first empirical excess risk bound of DP-ShuffleG. Our result shows that data shuffling results in worse empirical excess risk for DP-ShuffleG compared to DP-SGD. To address this limitation, we propose Interleaved-ShuffleG, a hybrid approach that integrates public data samples in private optimization. By alternating optimization steps that use private ...
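
The core contrast the abstract draws is the data-access pattern: DP-SGD samples with replacement at every iteration, while DP-ShuffleG permutes the data once per epoch and traverses it sequentially. A toy illustration of the two patterns (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy dataset of indices 0..7

# DP-SGD-style access: sample an index with replacement at every step,
# so some examples may repeat and others may never be visited in n steps.
with_replacement = [int(rng.integers(n)) for _ in range(n)]

# DP-ShuffleG-style access: permute once per epoch, then visit every
# example exactly once in that order.
shuffled_epoch = [int(i) for i in rng.permutation(n)]

print("with replacement:", with_replacement)  # duplicates possible
print("shuffled epoch:  ", shuffled_epoch)    # each index exactly once
```

The with-replacement pattern is what standard DP-SGD privacy analyses assume, which is why the shuffled pattern used in practice required the separate analysis developed in this paper.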

