[2604.05088] Scalar Federated Learning for Linear Quadratic Regulator
Electrical Engineering and Systems Science > Systems and Control
arXiv:2604.05088 (eess) [Submitted on 6 Apr 2026]

Title: Scalar Federated Learning for Linear Quadratic Regulator
Authors: Mohammadreza Rostami, Shahriar Talebi, Solmaz S. Kia

Abstract: We propose ScalarFedLQR, a communication-efficient federated algorithm for model-free learning of a common policy in linear quadratic regulator (LQR) control of heterogeneous agents. The method builds on a decomposed projected gradient mechanism in which each agent communicates only a scalar projection of a local zeroth-order gradient estimate. The server aggregates these scalar messages to reconstruct a global descent direction, reducing per-agent uplink communication from O(d) to O(1), independent of the policy dimension. Crucially, the projection-induced approximation error diminishes as the number of participating agents increases, yielding a favorable scaling law: larger fleets enable more accurate gradient recovery, admit larger stepsizes, and achieve faster linear convergence despite high dimensionality. Under standard regularity conditions, all iterates remain stabilizing and the average LQR cost converges at a linear rate. Numerical results demonstrate performance comparable to full-gradient federated LQR with substantially reduced communication.

Subjects: Systems and Control (eess.SY)
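The scalar-uplink mechanism described in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm (which operates on LQR policies with stability constraints); it is a toy version under stated assumptions: each agent has a hypothetical quadratic cost, probe directions are generated from a seed shared with the server so that only one scalar per agent per round is uplinked, and the server rescales the averaged scalar-weighted directions to form an (unbiased, for unit-sphere probes) gradient estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50          # policy dimension
n_agents = 200  # fleet size
mu = 1e-3       # smoothing radius for the zeroth-order estimate

# Hypothetical heterogeneous agent costs: f_i(theta) = 0.5 * theta^T diag(a_i) theta
A = 1.0 + rng.random((n_agents, d))
def cost(i, th):
    return 0.5 * (A[i] * th) @ th

theta = rng.standard_normal(d)
init_cost = np.mean([cost(i, theta) for i in range(n_agents)])

for step in range(300):
    # Shared randomness: probe directions are reproducible at the server
    # from a per-round seed, so each agent uplinks only one scalar.
    probe_rng = np.random.default_rng(step)
    directions = probe_rng.standard_normal((n_agents, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)

    # Each agent's scalar message: a central finite-difference estimate of
    # the directional derivative of its local cost along its probe direction.
    scalars = np.array([
        (cost(i, theta + mu * directions[i]) - cost(i, theta - mu * directions[i]))
        / (2 * mu)
        for i in range(n_agents)
    ])

    # Server reconstructs a descent direction from the scalars and the shared
    # probe directions. For unit-sphere probes, E[u u^T] = I/d, so scaling by d
    # makes the average an unbiased gradient estimate; averaging over many
    # agents shrinks the projection-induced error, matching the scaling law
    # claimed in the abstract.
    g_hat = d * (scalars[:, None] * directions).mean(axis=0)
    theta -= 0.05 * g_hat

avg_cost = np.mean([cost(i, theta) for i in range(n_agents)])
```

Note the communication pattern: the uplink per agent per round is a single float regardless of `d`, while the reconstruction quality improves with `n_agents`, since the variance of the averaged estimate scales roughly as d/n_agents.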