[2603.23472] Byzantine-Robust and Differentially Private Federated Optimization under Weaker Assumptions
Computer Science > Machine Learning

arXiv:2603.23472 (cs) [Submitted on 24 Mar 2026]

Title: Byzantine-Robust and Differentially Private Federated Optimization under Weaker Assumptions

Authors: Rustem Islamov, Grigory Malinovsky, Alexander Gaponov, Aurelien Lucchi, Peter Richtárik, Eduard Gorbunov

Abstract: Federated Learning (FL) enables heterogeneous clients to collaboratively train a shared model without centralizing their raw data, offering an inherent level of privacy. However, gradients and model updates can still leak sensitive information, and malicious clients may mount adversarial attacks such as Byzantine manipulation. These vulnerabilities highlight the need to address differential privacy (DP) and Byzantine robustness within a unified framework. Existing approaches, however, often rely on unrealistic assumptions such as bounded gradients, require auxiliary server-side datasets, or fail to provide convergence guarantees. We address these limitations by proposing Byz-Clip21-SGD2M, a new algorithm that integrates robust aggregation with double momentum and carefully designed clipping. We prove high-probability convergence guarantees under standard $L$-smoothness and $\sigma$-sub-Gaussian gradient noise assumptions, thereby relaxing conditions that dominate prior work. Our analysis recovers state-of-the-art...
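The abstract names three ingredients of Byz-Clip21-SGD2M: carefully designed clipping, a double momentum, and robust aggregation. The sketch below is only an illustration of how these pieces can fit together, not the paper's actual update rules (which the abstract does not give): it combines norm clipping, a per-client momentum, a server-side momentum, and a coordinate-wise median as a stand-in Byzantine-robust aggregator. All function names and the choice of median are assumptions for illustration.

```python
import numpy as np

def clip(v, tau):
    """Norm clipping: rescale v so that ||v|| <= tau."""
    norm = np.linalg.norm(v)
    return v if norm <= tau else v * (tau / norm)

def coordinate_median(updates):
    """A simple Byzantine-robust aggregator (coordinate-wise median);
    a stand-in for whatever robust aggregator the paper uses."""
    return np.median(np.stack(updates), axis=0)

def robust_clipped_sgd2m(grad_fns, x0, steps=100, lr=0.1, beta=0.9, tau=1.0):
    """Illustrative double-momentum loop: each client keeps a momentum
    buffer of clipped gradients; the server aggregates the buffers
    robustly and applies a second momentum before the model step."""
    x = x0.copy()
    client_m = [np.zeros_like(x0) for _ in grad_fns]
    server_m = np.zeros_like(x0)
    for _ in range(steps):
        for i, g in enumerate(grad_fns):
            # Client-side momentum over clipped stochastic gradients.
            client_m[i] = beta * client_m[i] + (1 - beta) * clip(g(x), tau)
        # Robust aggregation limits the influence of Byzantine clients.
        agg = coordinate_median(client_m)
        # Server-side (second) momentum, then the model update.
        server_m = beta * server_m + (1 - beta) * agg
        x = x - lr * server_m
    return x
```

With one Byzantine client that reports an enormous gradient, clipping caps the magnitude of its contribution and the median aggregator sidelines it, so the iterate still settles near the honest clients' optimum; without both defenses a single such client could move the model arbitrarily.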