[2602.06838] An Adaptive Differentially Private Federated Learning Framework with Bi-level Optimization

arXiv - AI · 4 min read

Summary

This paper presents an adaptive differentially private federated learning framework that addresses challenges in model efficiency and stability during training across heterogeneous data environments.

Why It Matters

The research is significant as it tackles the critical issues of data privacy and model performance in federated learning, particularly in real-world applications where data is often non-IID and device capabilities vary. By enhancing stability and accuracy, this framework could improve the deployment of federated learning in sensitive applications, such as healthcare and finance.

Key Takeaways

  • Introduces a framework to enhance federated learning efficiency under privacy constraints.
  • Utilizes adaptive gradient clipping to improve model stability.
  • Implements a lightweight local compression module to mitigate noise amplification.
  • Demonstrates improved convergence stability and accuracy on CIFAR-10 and SVHN datasets.
  • Addresses challenges posed by heterogeneous data and device variability.

Computer Science > Artificial Intelligence — arXiv:2602.06838 (cs)

This paper has been withdrawn by Hui Ma. [Submitted on 6 Feb 2026 (v1), last revised 19 Feb 2026 (this version, v2)]

Title: An Adaptive Differentially Private Federated Learning Framework with Bi-level Optimization

Authors: Jin Wang, Hui Ma, Fei Xing, Ming Yan

Abstract: Federated learning enables collaborative model training across distributed clients while preserving data privacy. However, in practical deployments, device heterogeneity and non-independent and identically distributed (Non-IID) data often lead to highly unstable and biased gradient updates. When differential privacy is enforced, conventional fixed gradient clipping and Gaussian noise injection may further amplify gradient perturbations, resulting in training oscillation and degraded model performance. To address these challenges, we propose an adaptive differentially private federated learning framework that explicitly targets model efficiency under heterogeneous and privacy-constrained settings. On the client side, a lightweight local compression module is introduced to regularize intermediate representations and constrain gradient variability, thereby mitigating noise amplification during local optimization. On the server s...
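To make the clipping-and-noise mechanism concrete, here is a minimal, generic sketch of adaptive gradient clipping with Gaussian noise injection in the DP-SGD style. This is an illustration of the general technique only, not the paper's bi-level formulation: the median-of-recent-norms clipping bound, the function name, and all parameters are assumptions for the example.

```python
import math
import random
from statistics import median

def adaptive_clip_and_noise(update, norm_history, noise_multiplier, rng):
    """Clip a client update to an adaptive bound, then add Gaussian noise.

    The bound is the median of recently observed update norms -- one
    common adaptive heuristic (assumed here for illustration). Noise is
    calibrated to the clipping bound, as in standard DP-SGD.
    """
    clip_bound = median(norm_history)
    norm = math.sqrt(sum(x * x for x in update))
    # Scale the update down only when its norm exceeds the bound.
    scale = min(1.0, clip_bound / max(norm, 1e-12))
    clipped = [x * scale for x in update]
    # Gaussian noise with std proportional to the clipping bound.
    noised = [x + rng.gauss(0.0, noise_multiplier * clip_bound)
              for x in clipped]
    return noised, clip_bound

rng = random.Random(0)
update = [3.0, 4.0]            # L2 norm is 5
history = [1.0, 2.0, 3.0]      # median gives an adaptive bound of 2
noised, bound = adaptive_clip_and_noise(update, history,
                                        noise_multiplier=0.0, rng=rng)
# With the noise multiplier set to 0, the update is rescaled so its
# norm equals the adaptive bound of 2.
```

Because the bound tracks the observed norm distribution rather than a fixed constant, unusually large (biased) client updates are damped while typical updates pass through nearly unchanged, which is the stability property the abstract attributes to adaptive clipping.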

