[2210.13277] CompressedScaffnew: The First Theoretical Double Acceleration of Communication from Local Training and Compression in Distributed Optimization

arXiv - Machine Learning · 4 min read

About this article


Computer Science > Machine Learning

arXiv:2210.13277 (cs)

[Submitted on 24 Oct 2022 (v1), last revised 2 Apr 2026 (this version, v4)]

Title: CompressedScaffnew: The First Theoretical Double Acceleration of Communication from Local Training and Compression in Distributed Optimization

Authors: Laurent Condat, Ivan Agarský, Peter Richtárik

Abstract: In distributed optimization, a large number of machines alternate between local computations and communication with a coordinating server. Communication, which can be slow and costly, is the main bottleneck in this setting. To reduce this burden and thereby accelerate distributed gradient descent, two strategies are popular: 1) communicate less frequently, that is, perform several iterations of local computations between the communication rounds; and 2) communicate compressed information instead of full-dimensional vectors. We propose CompressedScaffnew, the first algorithm for distributed optimization that jointly harnesses these two strategies and converges linearly to an exact solution in the strongly convex setting, with a doubly accelerated rate: it benefits from the two acceleration mechanisms provided by local training and compression, namely a better dependency on the condition number of the functions and on the dimension of the model, respectively.
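The abstract names two complementary communication-reduction mechanisms: running several local gradient steps between communication rounds, and sending compressed rather than full-dimensional vectors. As a rough illustration of how these two mechanisms slot into a distributed gradient method, here is a minimal sketch; the synthetic quadratic objectives, unbiased rand-k sparsification, and plain update averaging are all assumptions made for the demo, and this is not the paper's CompressedScaffnew algorithm.

```python
# Minimal sketch: distributed gradient descent with (1) local training
# (tau local steps per communication round) and (2) compressed uplink
# messages (unbiased rand-k sparsification). Illustrative only -- NOT
# the CompressedScaffnew algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim = 10, 50
tau, k, lr = 5, 10, 0.05  # local steps, coordinates kept, step size

# Worker i holds a local quadratic f_i(x) = 0.5 * ||A_i x - b_i||^2,
# so its gradient is A_i^T (A_i x - b_i).
A = [np.eye(dim) + rng.standard_normal((dim, dim)) / np.sqrt(dim)
     for _ in range(n_workers)]
b = [rng.standard_normal(dim) for _ in range(n_workers)]

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

def rand_k(v, k):
    """Unbiased rand-k compression: keep k random coordinates, rescale by d/k."""
    mask = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    mask[idx] = v.size / k
    return v * mask

x_server = np.zeros(dim)
for _ in range(200):                             # communication rounds
    updates = []
    for i in range(n_workers):
        x = x_server.copy()
        for _ in range(tau):                     # strategy 1: local training
            x -= lr * grad(i, x)
        updates.append(rand_k(x - x_server, k))  # strategy 2: compression
    x_server += np.mean(updates, axis=0)         # server averages the updates

full_grad = np.mean([grad(i, x_server) for i in range(n_workers)], axis=0)
print(f"norm of the average gradient after 200 rounds: {np.linalg.norm(full_grad):.2e}")
```

A naive combination like this one generally stalls in a neighborhood of the solution, since local drift and compression noise perturb the iterates. That is exactly the gap the abstract highlights: CompressedScaffnew is presented as the first method that uses both mechanisms and still converges linearly to the exact solution in the strongly convex setting.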

Originally published on April 03, 2026. Curated by AI News.

Related Articles

5 AI Models Tried to Scam Me. Some of Them Were Scary Good | WIRED

The cyber capabilities of AI models have experts rattled. AI’s social skills may be just as dangerous.

Wired - AI · 8 min

“AI engineers” today are just prompt engineers with better branding?

Hot take: A lot of what’s being called “AI engineering” right now feels like prompt tweaking, chaining APIs, and adding retries/guardrails. Not...

Reddit - Artificial Intelligence · 1 min
Anthropic’s Mythos rollout has missed America’s cybersecurity agency | The Verge

The Cybersecurity and Infrastructure Security Agency (CISA) doesn’t have access to Anthropic’s Mythos Preview, Axios reported.

The Verge - AI · 5 min

How do you anonymize code for a conference submission? [D]

Hi everyone, I have a question about anonymizing code for conference submissions. I’m submitting an AI/ML paper to a conference and would...

Reddit - Machine Learning · 1 min

