[2604.02990] FedSQ: Optimized Weight Averaging via Fixed Gating
Computer Science > Machine Learning

arXiv:2604.02990 (cs) [Submitted on 3 Apr 2026]

Title: FedSQ: Optimized Weight Averaging via Fixed Gating

Authors: Cristian Pérez-Corral, Jose I. Mestre, Alberto Fernández-Hernández, Manuel F. Dolz, José Duato, Enrique S. Quintana-Ortí

Abstract: Federated learning (FL) enables collaborative training across organizations without sharing raw data, but it is hindered by statistical heterogeneity (non-i.i.d. client data) and by the instability of naive weight averaging under client drift. In many cross-silo deployments, FL is warm-started from a strong pretrained backbone (e.g., ImageNet-1K) and then adapted to local domains. Motivated by recent evidence that ReLU-like gating regimes (structural knowledge) stabilize earlier than the remaining parameter values (quantitative knowledge), we propose FedSQ (Federated Structural-Quantitative learning), a transfer-initialized federated procedure based on a DualCopy, piecewise-linear view of deep networks. FedSQ freezes a structural copy of the pretrained model to induce fixed binary gating masks during federated fine-tuning, while only a quantitative copy is optimized locally and aggregated across rounds. Fixing the gating reduces learning t...
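The full paper is not reproduced here, but the DualCopy mechanism described in the abstract admits a compact illustration. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a frozen structural copy of a small ReLU network fixes the binary gating masks, a quantitative copy is trained through those fixed gates, and only the quantitative weights are averaged across clients in FedAvg style. The `MLP` architecture, the `DualCopyMLP` and `average_quantitative` names, and all layer sizes are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Two-layer ReLU network; sizes are illustrative, not from the paper."""
    def __init__(self, d_in=32, d_hid=64, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hid)
        self.fc2 = nn.Linear(d_hid, d_out)

class DualCopyMLP(nn.Module):
    """Quantitative copy trained under gates fixed by a frozen structural copy."""
    def __init__(self, pretrained: MLP):
        super().__init__()
        self.structural = copy.deepcopy(pretrained)    # frozen: defines the gates
        self.quantitative = copy.deepcopy(pretrained)  # trained and aggregated
        for p in self.structural.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        # Binary gating mask from the frozen copy's pre-activations
        # (the ReLU on/off regime of the pretrained backbone).
        with torch.no_grad():
            gate = (self.structural.fc1(x) > 0).float()
        # The quantitative copy passes through the *fixed* gates, so the
        # network stays piecewise-linear with regions chosen once at init.
        h = self.quantitative.fc1(x) * gate
        return self.quantitative.fc2(h)

def average_quantitative(models):
    """FedAvg-style aggregation over the quantitative copies only."""
    states = [m.quantitative.state_dict() for m in models]
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    for m in models:
        m.quantitative.load_state_dict(avg)

# Usage sketch: each client would run local SGD on m.quantitative.parameters(),
# then the server averages the quantitative weights at the end of the round.
clients = [DualCopyMLP(MLP()) for _ in range(3)]
average_quantitative(clients)
```

Because the structural copy never changes, every client optimizes within the same set of linear regions, which is one way to read the abstract's claim that fixed gating stabilizes naive weight averaging under client drift.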