[2603.28780] Byzantine-Robust and Communication-Efficient Distributed Training: Compressive and Cyclic Gradient Coding

About this article

Computer Science > Distributed, Parallel, and Cluster Computing

arXiv:2603.28780 (cs) [Submitted on 17 Mar 2026]

Title: Byzantine-Robust and Communication-Efficient Distributed Training: Compressive and Cyclic Gradient Coding

Authors: Chengxi Li, Youssef Allouah, Rachid Guerraoui, Mikael Skoglund, Ming Xiao

Abstract: In this paper, we study the problem of distributed training (DT) under Byzantine attacks with communication constraints. While prior work has developed various robust aggregation rules at the server to enhance robustness to Byzantine attacks, existing methods suffer from a critical limitation: the solution error does not diminish when the local gradients sent by different devices vary considerably, a consequence of data heterogeneity among the data subsets held by different devices. To overcome this limitation, we propose a novel DT method, cyclic gradient coding-based DT (LAD). In LAD, the server allocates the entire training dataset to the devices before training begins. In each iteration, it assigns computational tasks redundantly to the devices using cyclic gradient coding. Each honest device then computes local gradients on a fixed number of data subsets and encodes these local gradients before transmitting them to the server. The server aggregates the coded vecto...
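The cyclic, redundant task assignment the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual scheme: the function names, the plain-sum encoder, and the scalar stand-ins for gradients are all illustrative assumptions, and no Byzantine-robust aggregation rule is modeled here.

```python
# Toy sketch of cyclic gradient coding (illustrative only; the paper's
# actual encoder and robust aggregation rule are not reproduced here).

def cyclic_assignment(n_subsets, redundancy):
    """Device i is assigned `redundancy` consecutive data subsets,
    wrapping around cyclically, so every subset ends up held by
    exactly `redundancy` devices."""
    return [[(i + j) % n_subsets for j in range(redundancy)]
            for i in range(n_subsets)]

def encode(local_grads):
    """A minimal linear code: each device transmits one combined
    message (here, the sum of the gradients it computed) instead of
    each gradient separately, reducing communication."""
    return sum(local_grads)

# Scalar stand-ins for per-subset gradients: the gradient of subset s is s.
n, r = 5, 3
assignment = cyclic_assignment(n, r)
coded = [encode([float(s) for s in subsets]) for subsets in assignment]

# Each subset's gradient appears in exactly r coded messages, so the
# full gradient sum is recoverable by dividing out the redundancy.
total = sum(coded) / r
print(total)  # 10.0 == 0 + 1 + 2 + 3 + 4
```

The redundancy factor r is what buys robustness: because every data subset is replicated across r devices, a robust aggregator at the server has multiple independent copies of each subset's contribution to cross-check, which is the intuition behind tolerating Byzantine devices without the error floor caused by data heterogeneity.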

Originally published on April 01, 2026. Curated by AI News.
