[2602.21399] FedVG: Gradient-Guided Aggregation for Enhanced Federated Learning
Summary
The paper presents FedVG, a novel gradient-guided aggregation framework for federated learning that enhances model performance by addressing client drift and data heterogeneity.
Why It Matters
As federated learning becomes increasingly important for privacy-preserving AI, understanding and mitigating client drift is crucial. FedVG offers a solution that improves model generalization across diverse data sources, making it relevant for researchers and practitioners in machine learning and AI.
Key Takeaways
- FedVG utilizes a global validation set to guide federated aggregation.
- The framework scores each client model by the layerwise norms of its gradients computed on that validation set.
- FedVG improves performance in heterogeneous data environments.
- It can be integrated with existing federated learning algorithms for enhanced results.
- Extensive experiments validate its effectiveness on various datasets.
Abstract
Computer Science > Machine Learning, arXiv:2602.21399 (cs). Submitted on 24 Feb 2026. Authors: Alina Devkota, Jacob Thrasher, Donald Adjeroh, Binod Bhattarai, Prashnna K. Gyawali.
Federated Learning (FL) enables collaborative model training across multiple clients without sharing their private data. However, data heterogeneity across clients leads to client drift, which degrades the overall generalization performance of the model. This effect is further compounded by overemphasis on poorly performing clients. To address this problem, we propose FedVG, a novel gradient-based federated aggregation framework that leverages a global validation set to guide the optimization process. Such a global validation set can be established using readily available public datasets, ensuring accessibility and consistency across clients without compromising privacy. In contrast to conventional approaches that prioritize client dataset volume, FedVG assesses the generalization ability of client models by measuring the magnitude of validation gradients across layers. Specifically, we compute layerwise gradient norms to derive a client-specific score that reflects how much each client needs to adjust for improved generalization on the global validation set.
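The abstract's aggregation idea can be sketched as follows. The paper's exact scoring formula is not given in this summary, so this is a minimal hypothetical sketch: each client's layerwise validation-gradient norms are collapsed into a score (here, a larger total gradient magnitude means the client "needs more adjustment", so it is down-weighted), and the global model is a score-weighted average of client weights instead of a dataset-size-weighted one. The function names and the inverse-norm scoring rule are illustrative assumptions, not FedVG's published method.

```python
def client_score(layer_grad_norms):
    # Hypothetical scoring rule: a client whose validation gradients are
    # large still has far to go on the global validation set, so it
    # receives a smaller aggregation weight. FedVG's actual formula may
    # differ; the abstract only states that layerwise norms feed a score.
    return 1.0 / (1.0 + sum(layer_grad_norms))

def aggregate(client_models, client_layer_norms):
    # client_models: list of dicts mapping layer name -> list of weights.
    # client_layer_norms: per-client list of layerwise validation
    # gradient norms, measured on the shared global validation set.
    scores = [client_score(norms) for norms in client_layer_norms]
    total = sum(scores)
    weights = [s / total for s in scores]  # normalize to sum to 1

    # Score-weighted average of client parameters, layer by layer
    # (contrast with FedAvg, which weights by client dataset size).
    aggregated = {}
    for layer in client_models[0]:
        n_params = len(client_models[0][layer])
        aggregated[layer] = [
            sum(w * model[layer][i] for w, model in zip(weights, client_models))
            for i in range(n_params)
        ]
    return aggregated
```

For example, with two clients holding weights `[1.0, 2.0]` and `[3.0, 4.0]` on a layer, and validation-gradient norms `[0.0]` versus `[1.0]`, the first client gets aggregation weight 2/3 and the second 1/3, pulling the global model toward the client that already generalizes well.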