[2604.00449] Convergence of Byzantine-Resilient Gradient Tracking via Probabilistic Edge Dropout
Computer Science > Machine Learning

arXiv:2604.00449 (cs)

[Submitted on 1 Apr 2026]

Title: Convergence of Byzantine-Resilient Gradient Tracking via Probabilistic Edge Dropout

Authors: Amirhossein Dezhboro, Fateme Maleki, Arman Adibi, Erfan Amini, Jose E. Ramirez-Marquez

Abstract: We study distributed optimization over networks with Byzantine agents that may send arbitrary adversarial messages. We propose \emph{Gradient Tracking with Probabilistic Edge Dropout} (GT-PD), a stochastic gradient tracking method that preserves the convergence properties of gradient tracking under adversarial communication. GT-PD combines two complementary defense layers: a universal self-centered projection that clips each incoming message to a ball of radius $\tau$ around the receiving agent's own state, and a fully decentralized probabilistic dropout rule driven by a dual-metric trust score computed in the decision and tracking channels. This design bounds adversarial perturbations while preserving the doubly stochastic mixing structure, a property often lost under robust aggregation in decentralized settings. Under complete Byzantine isolation ($p_b = 0$), GT-PD converges linearly to a neighborhood determined solely by the stochastic gradient variance. For partial isolation ($p_b > 0$), we introduce \emph{Gradient Tracking with Probabilis...
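To make the two defense layers concrete, here is a minimal Python sketch of (i) the self-centered projection, which clips an incoming message to the Euclidean ball of radius $\tau$ centered at the receiver's own state, and (ii) a trust-driven probabilistic edge dropout with renormalized mixing weights. The trust-to-probability mapping, the uniform reweighting, and all function and parameter names are illustrative assumptions, not the paper's actual GT-PD construction.

```python
import numpy as np

def self_centered_projection(x_self, msg, tau):
    """Clip an incoming message to the ball of radius tau centered
    at the receiving agent's own state x_self."""
    diff = msg - x_self
    norm = np.linalg.norm(diff)
    if norm <= tau:
        return msg
    return x_self + (tau / norm) * diff

def keep_probability(trust, p_min=0.0, p_max=1.0):
    """Hypothetical mapping from a trust score in [0, 1] to the
    probability of keeping the edge this round (illustrative only)."""
    return p_min + (p_max - p_min) * trust

def aggregate(x_self, neighbor_msgs, trust_scores, tau, rng):
    """One illustrative aggregation round: project each incoming
    message, drop edges at random according to their trust-derived
    keep probability, then mix with uniform weights over the self
    loop and surviving edges so the weights still sum to one."""
    kept = []
    for msg, trust in zip(neighbor_msgs, trust_scores):
        if rng.random() < keep_probability(trust):
            kept.append(self_centered_projection(x_self, msg, tau))
    if not kept:                      # all edges dropped: fall back to self
        return x_self
    w = 1.0 / (len(kept) + 1)         # uniform weight over self + kept edges
    return w * x_self + w * sum(kept)

# Example: an honest message survives nearly unchanged, while a
# large adversarial message is clipped to the tau-ball and is also
# likely to be dropped due to its low trust score.
rng = np.random.default_rng(0)
x_self = np.zeros(3)
msgs = [np.array([0.1, 0.0, 0.0]), np.array([50.0, 50.0, 50.0])]
trust = [0.95, 0.2]
print(aggregate(x_self, msgs, trust, tau=1.0, rng=rng))
```

Note how the projection alone bounds the perturbation any single Byzantine message can inject (at most $\tau$ in norm), while the dropout layer probabilistically isolates persistently untrustworthy edges; renormalizing the surviving weights is one simple way to keep the mixing step stochastic, in the spirit of the structure-preservation property the abstract emphasizes.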