[2502.07977] RESIST: Resilient Decentralized Learning Using Consensus Gradient Descent
Computer Science > Machine Learning

arXiv:2502.07977 (cs)

[Submitted on 11 Feb 2025 (v1), last revised 6 Apr 2026 (this version, v2)]

Title: RESIST: Resilient Decentralized Learning Using Consensus Gradient Descent

Authors: Cheng Fang, Rishabh Dixit, Waheed U. Bajwa, Mert Gurbuzbalaban

Abstract: Empirical risk minimization (ERM) is a cornerstone of modern machine learning (ML), supported by advances in optimization theory that ensure efficient solutions with provable algorithmic and statistical learning rates. Privacy, memory, computation, and communication constraints necessitate data collection, processing, and storage across network-connected devices. In many applications, networks operate in decentralized settings where a central server cannot be assumed, requiring decentralized ML algorithms that are efficient and resilient. Decentralized learning, however, faces significant challenges, including an increased attack surface. This paper focuses on the man-in-the-middle (MITM) attack, wherein adversaries exploit communication vulnerabilities to inject malicious updates during training, potentially causing models to deviate from their intended ERM solutions. To address this challenge, we propose RESIST (Resilient dEcentralized learning using conSensus gradIent deScenT), an optimization algorithm designed to be robust against ad...
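To make the setting concrete, the sketch below shows plain decentralized consensus gradient descent: each node mixes its neighbors' models through a doubly stochastic gossip matrix, then takes a local gradient step. This is the attack-free baseline the abstract builds on, not the paper's RESIST algorithm (which additionally defends against malicious updates); the ring topology, mixing weights, and quadratic local objectives are illustrative assumptions.

```python
# Illustrative baseline (not the paper's RESIST algorithm): decentralized
# consensus gradient descent on a 4-node ring. Node i minimizes the local
# quadratic f_i(x) = 0.5 * (x - b[i])**2, so the global ERM solution is the
# mean of b. All problem data here are made up for the example.

b = [1.0, 2.0, 3.0, 4.0]        # local targets; global minimizer is mean(b) = 2.5
n = len(b)

# Doubly stochastic mixing matrix: weight 1/2 on self, 1/4 on each ring neighbor.
W = [[0.25 if (j - i) % n in (1, n - 1) else (0.5 if i == j else 0.0)
      for j in range(n)] for i in range(n)]

x = [0.0] * n                    # each node's model parameter
for t in range(500):
    step = 1.0 / (t + 2)         # diminishing stepsize for exact consensus
    # Consensus step: average over neighbors' current models.
    mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    # Local gradient step: grad f_i(x_i) = x_i - b[i].
    x = [mixed[i] - step * (x[i] - b[i]) for i in range(n)]
```

After 500 iterations every node is close to the shared minimizer 2.5; a MITM adversary corrupting the exchanged `mixed` values is exactly what a resilient variant must screen against.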