[2602.16155] Differentially Private Non-convex Distributionally Robust Optimization

arXiv - Machine Learning · 4 min read

Summary

This paper presents a novel approach to differentially private non-convex distributionally robust optimization (DRO), addressing the challenge of safeguarding sensitive training data while optimizing under distributional uncertainty.

Why It Matters

As real-world applications increasingly face distribution shifts and adversarial conditions, traditional optimization methods may fail. This research offers a framework that combines differential privacy with DRO, providing both data protection and model robustness, which is crucial for deploying machine learning systems in sensitive environments.

Key Takeaways

  • Introduces DP Double-Spider, a new differentially private optimization method tailored to DP-DRO (a generic DP gradient step is sketched after this list for context).
  • Establishes utility bounds under mild assumptions that improve on existing results.
  • Demonstrates superior performance in experiments compared to standard DP minimax optimization baselines.
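
To make the privacy side concrete, here is a minimal sketch of a generic clipped-gradient Gaussian-mechanism update, the standard way (ε, δ)-DP noise enters a first-order optimizer (as in DP-SGD). It is not the paper's DP Double-Spider update, whose structure is only described in the abstract below; the function name, hyperparameters, and toy quadratic loss are illustrative assumptions.

```python
import numpy as np

def dp_gradient_step(theta, per_example_grads, lr, clip_norm, noise_multiplier, rng):
    """Generic (epsilon, delta)-DP first-order update via the Gaussian mechanism.

    Clips each per-example gradient to `clip_norm`, averages, and adds Gaussian
    noise scaled to the clipping bound. This is the standard DP-SGD-style recipe,
    NOT the paper's DP Double-Spider update; it only shows where privacy noise
    enters an iterative optimizer.
    """
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Sensitivity of the averaged gradient is clip_norm / n, so noise scales with it.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noisy_grad = mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
    return theta - lr * noisy_grad

# Toy usage: mean estimation with loss 0.5 * (theta - z)^2, gradient (theta - z).
rng = np.random.default_rng(0)
data = rng.normal(1.0, 0.5, size=64)
theta = np.zeros(1)
for _ in range(200):
    grads = [np.array([theta[0] - z]) for z in data]
    theta = dp_gradient_step(theta, grads, lr=0.1, clip_norm=1.0,
                             noise_multiplier=1.0, rng=rng)
print("private estimate of the mean:", float(theta[0]))
```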

Computer Science > Machine Learning · arXiv:2602.16155 (cs) · Submitted on 18 Feb 2026

Title: Differentially Private Non-convex Distributionally Robust Optimization

Authors: Difei Xu, Meng Ding, Zebin Ma, Huanyi Xie, Youming Tao, Aicha Slaitane, Di Wang

Abstract: Real-world deployments routinely face distribution shifts, group imbalances, and adversarial perturbations, under which the traditional Empirical Risk Minimization (ERM) framework can degrade severely. Distributionally Robust Optimization (DRO) addresses this issue by optimizing the worst-case expected loss over an uncertainty set of distributions, offering a principled approach to robustness. Meanwhile, as training data in DRO always involves sensitive information, safeguarding it against leakage under Differential Privacy (DP) is essential. In contrast to classical DP-ERM, DP-DRO has received much less attention due to its minimax optimization structure with uncertainty constraint. To bridge the gap, we provide a comprehensive study of DP-(finite-sum)-DRO with $\psi$-divergence and non-convex loss. First, we study DRO with general $\psi$-divergence by reformulating it as a minimization problem, and develop a novel $(\varepsilon, \delta)$-DP optimization method, called DP Double-Spider, tailored to this structure. Under mild assumptions, we show that it ach...
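
For context, the $\psi$-divergence DRO objective the abstract refers to is usually written as a worst-case problem over distributions near the empirical one; the radius $\rho$ and the notation $\widehat{P}_n$ below are standard conventions assumed here rather than taken from the paper:

$$
\min_{\theta}\; \sup_{Q:\; D_\psi(Q\,\|\,\widehat{P}_n)\le \rho}\; \mathbb{E}_{z\sim Q}\big[\ell(\theta; z)\big],
\qquad
D_\psi(Q\,\|\,P) = \int \psi\!\left(\frac{dQ}{dP}\right) dP.
$$

The minimization reformulation the abstract mentions is typically obtained by dualizing the inner supremum; under standard conditions,

$$
\sup_{Q:\; D_\psi(Q\,\|\,\widehat{P}_n)\le \rho} \mathbb{E}_{Q}\big[\ell(\theta; z)\big]
= \inf_{\lambda \ge 0,\, \eta \in \mathbb{R}} \left\{ \lambda\rho + \eta + \lambda\, \mathbb{E}_{\widehat{P}_n}\!\left[\psi^*\!\left(\frac{\ell(\theta; z)-\eta}{\lambda}\right)\right] \right\},
$$

where $\psi^*$ is the convex conjugate of $\psi$, turning the minimax problem into a joint minimization over $(\theta, \lambda, \eta)$. Whether the paper uses exactly this dual form is an assumption here.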
