[2410.02260] FedScalar: Federated Learning with Scalar Communication for Bandwidth-Constrained Networks

arXiv - Machine Learning · 3 min read

About this article


Computer Science > Machine Learning · arXiv:2410.02260 (cs)

[Submitted on 3 Oct 2024 (v1), last revised 3 Apr 2026 (this version, v3)]

Title: FedScalar: Federated Learning with Scalar Communication for Bandwidth-Constrained Networks

Authors: M. Rostami, S. S. Kia

Abstract: In bandwidth-constrained federated learning (FL) settings, the repeated upload of high-dimensional model updates from agents to a central server constitutes the primary bottleneck, often rendering standard FL infeasible within practical communication budgets. We propose FedScalar, a communication-efficient FL algorithm in which each agent uploads only two scalar values per round, regardless of the model dimension $d$. Each agent encodes its local update difference as an inner product with a locally generated random vector and transmits the resulting scalar together with the generating seed, enabling the server to reconstruct an unbiased gradient estimate without any high-dimensional transmission. We prove that FedScalar achieves a convergence rate of $O(d/\sqrt{K})$ to a stationary point for smooth non-convex loss functions, and show that adopting a Rademacher distribution for the random vector reduces the aggregation variance compared to the Gaussian case. Numerical simulations confirm that the dimension-free up…
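The scalar encode/decode idea described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the function names, the shared-seed RNG convention, and the averaging loop are all assumptions. The key property shown is that for a Rademacher vector $v$ (entries in $\{-1, +1\}$), the server-side reconstruction $s \cdot v$ with $s = \langle v, \Delta \rangle$ is an unbiased estimate of the update $\Delta$, since $\mathbb{E}[v v^\top] = I$.

```python
import numpy as np

def agent_encode(delta, seed):
    # Agent side: draw a Rademacher vector from a seeded RNG and compress
    # the d-dimensional update into a single scalar projection.
    rng = np.random.default_rng(seed)
    v = rng.integers(0, 2, size=delta.shape) * 2 - 1  # entries in {-1, +1}
    s = float(v @ delta)
    return s, seed  # only two scalars leave the agent

def server_decode(s, seed, d):
    # Server side: regenerate the identical Rademacher vector from the seed
    # and form s * v, an unbiased estimate of delta because E[v v^T] = I.
    rng = np.random.default_rng(seed)
    v = rng.integers(0, 2, size=d) * 2 - 1
    return s * v

# Toy check of unbiasedness: averaging many independent one-scalar
# estimates of the same update recovers it (variance shrinks with the
# number of agents/rounds averaged).
d, n_agents = 1000, 2000
true_delta = np.ones(d)
est = np.zeros(d)
for seed in range(n_agents):
    s, sd = agent_encode(true_delta, seed)
    est += server_decode(s, sd, d)
est /= n_agents
```

Note that each single-round estimate has per-coordinate variance on the order of $\|\Delta\|^2$, which is where the $O(d/\sqrt{K})$ rate and the Rademacher-vs-Gaussian variance comparison in the paper come in: the compression is extremely aggressive, and convergence relies on averaging the noise out across rounds.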

Originally published on April 07, 2026. Curated by AI News.

