[2505.08125] Sharp Gaussian approximations for Decentralized Federated Learning

arXiv - Machine Learning 3 min read Article

Summary

This paper presents sharp Gaussian approximations for local SGD in decentralized federated learning, providing statistical guarantees beyond convergence and supporting bootstrap-based tests for detecting adversarial attacks.

Why It Matters

As federated learning becomes increasingly relevant in privacy-sensitive environments, understanding its statistical properties is crucial for developing robust machine learning models. This research offers significant theoretical advancements that can improve the reliability and security of decentralized learning systems.

Key Takeaways

  • Introduces Gaussian approximation results for local SGD in federated learning.
  • Establishes a Berry-Esseen theorem for improved statistical guarantees.
  • Presents time-uniform Gaussian approximations for robust adversarial detection.
  • Supports Gaussian bootstrap-based tests for enhanced model evaluation.
  • Includes extensive simulations to validate theoretical findings.
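To make the object of study concrete, here is a minimal sketch of local SGD on a synthetic least-squares problem. The setup (number of clients, shard sizes, learning rate) is illustrative and not taken from the paper; the structure is the standard one: each client runs several SGD steps on its own data shard, then the server averages the client iterates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares setup (illustrative; not from the paper):
# each of n_clients holds its own data shard.
d, n_clients, n_local, n_rounds = 5, 4, 10, 20
theta_true = rng.normal(size=d)
shards = []
for _ in range(n_clients):
    X = rng.normal(size=(50, d))
    y = X @ theta_true + 0.1 * rng.normal(size=50)
    shards.append((X, y))

def local_sgd(theta0, lr=0.05):
    """Local SGD: each client runs n_local SGD steps on its shard,
    then the server averages the client iterates (one round)."""
    theta = theta0.copy()
    for _ in range(n_rounds):
        local_iterates = []
        for X, y in shards:
            th = theta.copy()
            for _ in range(n_local):
                i = rng.integers(len(y))           # sample one data point
                grad = (X[i] @ th - y[i]) * X[i]   # least-squares stochastic gradient
                th -= lr * grad
            local_iterates.append(th)
        theta = np.mean(local_iterates, axis=0)    # communication / averaging step
    return theta

theta_hat = local_sgd(np.zeros(d))
```

The paper's Gaussian approximation results concern the distribution of iterates like `theta_hat` (and, for the time-uniform results, the whole trajectory) around their limit.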

Statistics > Machine Learning

arXiv:2505.08125 (stat) [Submitted on 12 May 2025 (v1), last revised 24 Feb 2026 (this version, v3)]

Title: Sharp Gaussian approximations for Decentralized Federated Learning

Authors: Soham Bonnerjee, Sayar Karmakar, Wei Biao Wu

Abstract: Federated Learning has gained traction in privacy-sensitive collaborative environments, with local SGD emerging as a key optimization method in decentralized settings. While its convergence properties are well-studied, asymptotic statistical guarantees beyond convergence remain limited. In this paper, we present two generalized Gaussian approximation results for local SGD and explore their implications. First, we prove a Berry-Esseen theorem for the final local SGD iterates, enabling valid multiplier bootstrap procedures. Second, motivated by robustness considerations, we introduce two distinct time-uniform Gaussian approximations for the entire trajectory of local SGD. The time-uniform approximations support Gaussian bootstrap-based tests for detecting adversarial attacks. Extensive simulations are provided to support our theoretical results.

Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Statistics Theory (math.ST)

Cite as: arXiv:2505.08125 [stat.ML] (or arXiv:2505.08125v3 [stat.ML] for this version) https://doi.org...
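The multiplier bootstrap mentioned in the abstract can be illustrated in its generic form: center the observations, multiply them by i.i.d. standard normal weights, and use the resulting statistics' quantiles to build a confidence interval. This is a textbook sketch for a sample mean, not the paper's procedure for local SGD iterates.

```python
import numpy as np

rng = np.random.default_rng(1)

def multiplier_bootstrap_ci(samples, n_boot=2000, alpha=0.05):
    """Generic multiplier bootstrap CI for a mean (illustrative only;
    the paper applies the idea to final local SGD iterates)."""
    n = len(samples)
    mean = samples.mean()
    centered = samples - mean
    # Perturb the centered data with i.i.d. N(0, 1) multipliers to
    # mimic the sampling fluctuation of the mean.
    stats = np.array([
        (rng.normal(size=n) * centered).mean() for _ in range(n_boot)
    ])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return mean - hi, mean - lo

samples = rng.normal(loc=2.0, scale=1.0, size=200)
ci = multiplier_bootstrap_ci(samples)
```

The Berry-Esseen theorem for the final iterates is what licenses this kind of procedure: it quantifies how close the iterate's distribution is to Gaussian, so bootstrap quantiles are asymptotically valid.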
