[2602.16155] Differentially Private Non-convex Distributionally Robust Optimization
Summary
This paper presents a novel approach to differentially private non-convex distributionally robust optimization (DRO), addressing challenges in safeguarding sensitive data while optimizing under uncertainty.
Why It Matters
As real-world applications increasingly face distribution shifts and adversarial conditions, traditional optimization methods may fail. This research offers a framework that combines differential privacy with DRO, providing both data privacy and robustness guarantees, which are crucial for deploying machine learning systems in sensitive environments.
Key Takeaways
- Introduces DP Double-Spider, a new optimization method for DP-DRO.
- Establishes utility bounds under mild assumptions that improve on existing results.
- Demonstrates superior performance in experiments compared to traditional DP minimax optimization approaches.
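The DP Double-Spider algorithm itself is specific to the paper, but the core differential-privacy mechanism such methods build on, per-sample gradient clipping plus calibrated Gaussian noise, can be sketched as follows. All function and parameter names here are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_sigma(sensitivity, epsilon, delta):
    # Classical Gaussian-mechanism calibration for (epsilon, delta)-DP.
    # Valid for epsilon <= 1; modern privacy accountants give tighter constants.
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def dp_mean_gradient(per_sample_grads, clip_norm, epsilon, delta, rng):
    """One privatized gradient estimate: clip each per-sample gradient to
    L2 norm clip_norm, average, and add Gaussian noise scaled to the
    sensitivity clip_norm / n of the clipped mean."""
    n, d = per_sample_grads.shape
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    mean_grad = clipped.mean(axis=0)
    sigma = gaussian_sigma(clip_norm / n, epsilon, delta)
    return mean_grad + rng.normal(0.0, sigma, size=d)

# Toy usage: one noisy gradient step on a least-squares loss.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 3)), rng.normal(size=200)
theta = np.zeros(3)
per_sample = (X @ theta - y)[:, None] * X   # per-sample gradients, shape (200, 3)
theta -= 0.1 * dp_mean_gradient(per_sample, clip_norm=1.0,
                                epsilon=1.0, delta=1e-5, rng=rng)
```

The paper's variance-reduced minimax method is considerably more involved; this sketch only shows the privatization step that any (ε, δ)-DP first-order method shares.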
Computer Science > Machine Learning
arXiv:2602.16155 (cs) [Submitted on 18 Feb 2026]
Title: Differentially Private Non-convex Distributionally Robust Optimization
Authors: Difei Xu, Meng Ding, Zebin Ma, Huanyi Xie, Youming Tao, Aicha Slaitane, Di Wang
Abstract: Real-world deployments routinely face distribution shifts, group imbalances, and adversarial perturbations, under which the traditional Empirical Risk Minimization (ERM) framework can degrade severely. Distributionally Robust Optimization (DRO) addresses this issue by optimizing the worst-case expected loss over an uncertainty set of distributions, offering a principled approach to robustness. Meanwhile, as training data in DRO always involves sensitive information, safeguarding it against leakage under Differential Privacy (DP) is essential. In contrast to classical DP-ERM, DP-DRO has received much less attention due to its minimax optimization structure with uncertainty constraint. To bridge the gap, we provide a comprehensive study of DP-(finite-sum)-DRO with $\psi$-divergence and non-convex loss. First, we study DRO with general $\psi$-divergence by reformulating it as a minimization problem, and develop a novel $(\varepsilon, \delta)$-DP optimization method, called DP Double-Spider, tailored to this structure. Under mild assumptions, we show that it ach...
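For context, the $\psi$-divergence DRO objective and its standard dual reformulation as a plain minimization, the structural trick the abstract alludes to, can be written as follows. The notation is generic (following the well-known $\psi$-divergence duality of Ben-Tal et al.), not necessarily the paper's exact formulation:

```latex
% Primal DRO objective over a psi-divergence ball of radius rho
\min_{\theta}\; \sup_{Q:\, D_{\psi}(Q \,\|\, \widehat{P}_n) \le \rho}\;
  \mathbb{E}_{x \sim Q}\bigl[\ell(\theta; x)\bigr],
\qquad
D_{\psi}(Q \,\|\, P) = \int \psi\!\Bigl(\tfrac{dQ}{dP}\Bigr)\, dP.

% Dual reformulation (psi^* is the convex conjugate of psi), which turns
% the inner sup into a joint minimization over (theta, lambda, eta):
\min_{\theta,\; \lambda \ge 0,\; \eta}\;
  \lambda \rho + \eta
  + \lambda\, \mathbb{E}_{x \sim \widehat{P}_n}
      \Bigl[\psi^{*}\!\Bigl(\tfrac{\ell(\theta; x) - \eta}{\lambda}\Bigr)\Bigr].
```

The dual form removes the distributional constraint entirely, which is what makes a single-loop DP optimization method over $(\theta, \lambda, \eta)$ conceivable in the first place.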