[2211.02003] Private Blind Model Averaging - Distributed, Non-interactive, and Convergent
Summary
This paper presents Private Blind Model Averaging, a method for distributed, non-interactive, and convergent learning that enhances privacy while minimizing communication between users.
Why It Matters
The research addresses the critical need for privacy in distributed learning, particularly in edge computing scenarios. By reducing communication and synchronization requirements, it enables more efficient model training while maintaining data confidentiality, which is essential in today's data-sensitive environments.
Key Takeaways
- Introduces Blind Model Averaging as a non-interactive approach to distributed learning.
- Demonstrates that BlindAvg converges toward the centrally trained model when strong L2-regularization is used.
- Presents SoftmaxReg, a new learner with improved privacy-utility tradeoff over traditional SVMs.
- Evaluates the method on multiple datasets, showcasing its effectiveness in non-IID scenarios.
- Highlights the importance of privacy in machine learning applications, especially in edge devices.
Computer Science > Cryptography and Security
arXiv:2211.02003 (cs) [Submitted on 3 Nov 2022 (v1), last revised 24 Feb 2026 (this version, v3)]
Title: Private Blind Model Averaging - Distributed, Non-interactive, and Convergent
Authors: Moritz Kirschte, Sebastian Meiser, Saman Ardalan, Esfandiar Mohammadi
Abstract
Distributed differentially private learning techniques enable a large number of users to jointly learn a model without having to first centrally collect the training data. At the same time, neither the communication between the users nor the resulting model shall leak information about the training data. This kind of learning technique can be deployed to edge devices if it can be scaled up to a large number of users, particularly if the communication is reduced to a minimum: no interaction, i.e., each party only sends a single message. The best previously known methods are based on gradient averaging, which inherently requires many synchronization rounds. A promising non-interactive alternative to gradient averaging relies on so-called output perturbation: each user first locally finishes training and then submits its model for secure averaging without further synchronization. We analyze this paradigm, which we coin blind model averaging (BlindAvg), in the setting of convex and smooth empirical risk minimization...
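The output-perturbation paradigm described in the abstract can be illustrated with a minimal sketch: each user locally finishes training an L2-regularized (here, logistic-regression) model, perturbs the finished weights with Gaussian noise, and submits them for a single round of averaging. This is only an illustrative toy, not the paper's method: the learner, noise scale `sigma`, regularization strength `lam`, and all function names are assumptions for demonstration, and no secure-aggregation layer or formal privacy calibration is included.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y, lam=1.0, lr=0.1, steps=500):
    """Locally fit an L2-regularized logistic-regression model by gradient descent.

    Strong regularization (lam) is the regime in which blind averaging
    is claimed to converge toward centralized training."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (p - y) / n + lam * w        # loss gradient + L2 term
        w -= lr * grad
    return w

def perturb(w, sigma=0.1):
    """Output perturbation: add Gaussian noise to the *finished* model.

    sigma is a placeholder; a real deployment would calibrate it to a
    differential-privacy budget."""
    return w + rng.normal(0.0, sigma, size=w.shape)

def blind_avg(models):
    """Non-interactive step: average the users' perturbed models.

    Each user sends a single message (its model); no synchronization rounds."""
    return np.mean(models, axis=0)

# Toy data split across 3 simulated users (non-private, for illustration only).
X = rng.normal(size=(300, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.0]) > 0).astype(float)
user_idx = np.array_split(np.arange(300), 3)
local_models = [perturb(local_train(X[i], y[i])) for i in user_idx]
w_avg = blind_avg(local_models)
```

Note the communication pattern this sketches: training and perturbation happen entirely on-device, and the only message a user ever sends is its finished model, which is what makes the scheme non-interactive in contrast to round-based gradient averaging.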