[2410.02260] FedScalar: Federated Learning with Scalar Communication for Bandwidth-Constrained Networks
Computer Science > Machine Learning
arXiv:2410.02260 (cs)
[Submitted on 3 Oct 2024 (v1), last revised 3 Apr 2026 (this version, v3)]

Title: FedScalar: Federated Learning with Scalar Communication for Bandwidth-Constrained Networks
Authors: M. Rostami, S. S. Kia

Abstract: In bandwidth-constrained federated learning~(FL) settings, the repeated upload of high-dimensional model updates from agents to a central server constitutes the primary bottleneck, often rendering standard FL infeasible within practical communication budgets. We propose \emph{FedScalar}, a communication-efficient FL algorithm in which each agent uploads only two scalar values per round, regardless of the model dimension~$d$. Each agent encodes its local update difference as an inner product with a locally generated random vector and transmits the resulting scalar together with the generating seed, enabling the server to reconstruct an unbiased gradient estimate without any high-dimensional transmission. We prove that \emph{FedScalar} achieves a convergence rate of $O(d/\sqrt{K})$ to a stationary point for smooth non-convex loss functions, and show that adopting a Rademacher distribution for the random vector reduces the aggregation variance compared to the Gaussian case. Numerical simulations confirm that the dimension-free up...
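To make the encode/reconstruct step concrete, here is a minimal sketch of the scalar-upload mechanism the abstract describes, assuming NumPy and a shared seeding convention between agent and server; the function names `agent_upload` and `server_reconstruct` and the demo setup are illustrative, not the paper's reference implementation.

```python
import numpy as np

def agent_upload(delta, seed, dist="rademacher"):
    """Encode a d-dim update `delta` as one scalar s = <v, delta>,
    where v is regenerable from `seed`. Returns the two scalars
    (inner product and seed) that the agent uploads each round."""
    rng = np.random.default_rng(seed)
    if dist == "rademacher":
        v = rng.integers(0, 2, size=delta.shape) * 2 - 1  # entries in {-1, +1}
    else:
        v = rng.standard_normal(delta.shape)  # Gaussian N(0, I) alternative
    return float(v @ delta), seed

def server_reconstruct(scalar, seed, d, dist="rademacher"):
    """Regenerate v from the seed and form the estimate s * v of delta.
    Since E[v v^T] = I for both distributions, E[s * v] = delta (unbiased);
    per the abstract's claim, the Rademacher choice yields lower variance
    than the Gaussian one."""
    rng = np.random.default_rng(seed)
    if dist == "rademacher":
        v = rng.integers(0, 2, size=d) * 2 - 1
    else:
        v = rng.standard_normal(d)
    return scalar * v

# Demo: averaging reconstructed estimates across many independent seeds
# (here standing in for agents) concentrates around the true update.
d, n_agents = 1000, 500
true_delta = np.random.default_rng(0).standard_normal(d)
estimates = [
    server_reconstruct(*agent_upload(true_delta, seed), d)
    for seed in range(n_agents)
]
print(np.linalg.norm(np.mean(estimates, axis=0) - true_delta))  # shrinks as n_agents grows
```

Note the design point this illustrates: because the server re-derives $v$ from the transmitted seed, each agent's uplink cost is two scalars per round regardless of $d$; the price is estimator variance, which is why the choice of distribution for $v$ (Rademacher vs. Gaussian) matters in the paper's analysis.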