[2602.20003] A Secure and Private Distributed Bayesian Federated Learning Design
Summary
This paper presents a novel Bayesian framework for Distributed Federated Learning (DFL) that strengthens privacy, accelerates convergence, and improves robustness against adversarial (Byzantine) attacks.
Why It Matters
As federated learning becomes increasingly vital for privacy-preserving AI applications, this research addresses key challenges such as privacy leakage and slow convergence, making it relevant for developers and researchers in machine learning and AI safety.
Key Takeaways
- Introduces a DFL framework that integrates Byzantine robustness and privacy preservation.
- Proposes an optimization problem for neighbor selection to minimize global loss under security constraints.
- Develops a GNN-based RL algorithm for autonomous connection decisions among devices.
- Demonstrates superior robustness and efficiency compared to traditional methods.
- Addresses critical challenges in DFL, making it applicable for real-world decentralized systems.
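The neighbor-selection step in the takeaways above can be viewed as a constrained subset-selection problem: each device picks peers that most reduce its loss while keeping a privacy/security risk budget. The paper formulates this exactly and solves it with a GNN-based RL policy; as a much simpler illustrative sketch (all names and the greedy heuristic are assumptions, not the authors' algorithm), a device could rank candidates by estimated benefit per unit risk:

```python
def select_neighbors(candidates, loss_reduction, risk, budget, k_max):
    """Toy greedy neighbor selection (illustrative, not the paper's method).

    candidates:     iterable of neighbor ids
    loss_reduction: dict id -> estimated decrease in local loss if selected
    risk:           dict id -> estimated privacy/security risk of selecting
    budget:         maximum total risk allowed
    k_max:          maximum number of neighbors to connect to
    """
    chosen, total_risk = [], 0.0
    # Rank candidates by loss reduction per unit of risk, best first.
    ranked = sorted(
        candidates,
        key=lambda n: loss_reduction[n] / (risk[n] + 1e-9),
        reverse=True,
    )
    for n in ranked:
        if len(chosen) < k_max and total_risk + risk[n] <= budget:
            chosen.append(n)
            total_risk += risk[n]
    return chosen
```

In the actual framework the benefit and risk estimates are not given in closed form; that is precisely why the authors learn the selection policy with a graph neural network and reinforcement learning instead of a fixed heuristic like this one.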
Computer Science > Machine Learning
arXiv:2602.20003 (cs) [Submitted on 23 Feb 2026]
Title: A Secure and Private Distributed Bayesian Federated Learning Design
Authors: Nuocheng Yang, Sihua Wang, Zhaohui Yang, Mingzhe Chen, Changchuan Yin, Kaibin Huang
Abstract: Distributed Federated Learning (DFL) enables decentralized model training across large-scale systems without a central parameter server. However, DFL faces three critical challenges: privacy leakage from honest-but-curious neighbors, slow convergence due to the lack of central coordination, and vulnerability to Byzantine adversaries aiming to degrade model accuracy. To address these issues, we propose a novel DFL framework that integrates Byzantine robustness, privacy preservation, and convergence acceleration. Within this framework, each device trains a local model using a Bayesian approach and independently selects an optimal subset of neighbors for posterior exchange. We formulate this neighbor selection as an optimization problem to minimize the global loss function under security and privacy constraints. Solving this problem is challenging because devices only possess partial network information, and the complex coupling between topology, security, and convergence remains unclear. To bridge this gap, we first analytically characterize the trade-offs ...
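The abstract's "posterior exchange" means devices share probability distributions over model parameters rather than raw point estimates. The paper does not specify the posterior family here; as a minimal sketch assuming Gaussian posteriors over a scalar parameter (an assumption for illustration only), neighbors' posteriors can be fused by the standard product-of-Gaussians rule, where precisions add and means are precision-weighted:

```python
def fuse_gaussians(posteriors):
    """Fuse Gaussian posteriors given as (mean, precision) pairs.

    This is the textbook product-of-Gaussians update (up to
    normalization): total precision is the sum of precisions, and
    the fused mean is the precision-weighted average of the means.
    Illustrative only; the paper's exact fusion rule is not given here.
    """
    total_precision = sum(prec for _, prec in posteriors)
    fused_mean = sum(mu * prec for mu, prec in posteriors) / total_precision
    return fused_mean, total_precision
```

One appeal of this kind of Bayesian exchange in a Byzantine setting is that each incoming posterior carries its own uncertainty, so a receiving device has a principled handle for down-weighting implausible or low-confidence updates from suspect neighbors.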