[2504.05615] FedEFC: Federated Learning Using Enhanced Forward Correction Against Noisy Labels
Summary
The paper presents FedEFC, a federated learning method that mitigates the impact of noisy labels through two techniques: prestopping and an enhanced forward loss correction tailored to the federated setting.
Why It Matters
As federated learning becomes increasingly important for privacy-preserving AI, effectively managing noisy labels is critical for improving model performance. FedEFC offers a promising solution that could enhance the reliability of federated learning systems, making it relevant for researchers and practitioners in machine learning and AI.
Key Takeaways
- FedEFC introduces prestopping to prevent overfitting to mislabeled data.
- The method includes a tailored loss correction strategy for federated learning.
- Experimental results show up to 41.64% performance improvement over existing methods.
- Theoretical analysis supports the alignment of FL objectives with clean label distributions.
- Addressing noisy labels is essential for effective federated learning in heterogeneous data environments.
Computer Science > Machine Learning
arXiv:2504.05615 (cs)
[Submitted on 8 Apr 2025 (v1), last revised 18 Feb 2026 (this version, v3)]
Title: FedEFC: Federated Learning Using Enhanced Forward Correction Against Noisy Labels
Authors: Seunghun Yu, Jin-Hyun Ahn, Joonhyuk Kang
Abstract: Federated Learning (FL) is a powerful framework for privacy-preserving distributed learning. It enables multiple clients to collaboratively train a global model without sharing raw data. However, handling noisy labels in FL remains a major challenge due to heterogeneous data distributions and communication constraints, which can severely degrade model performance. To address this issue, we propose FedEFC, a novel method designed to tackle the impact of noisy labels in FL. FedEFC mitigates this issue through two key techniques: (1) prestopping, which prevents overfitting to mislabeled data by dynamically halting training at an optimal point, and (2) loss correction, which adjusts model updates to account for label noise. In particular, we develop an effective loss correction tailored to the unique challenges of FL, including data heterogeneity and decentralized training. Furthermore, we provide a theoretical analysis, leveraging the composite proper loss property, to demonstrate that the FL objective function under noisy lab...
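The paper's loss correction is tailored to FL and its exact form is not given in this excerpt; as a rough illustration of the underlying idea, standard forward correction evaluates the loss on noise-adjusted predictions via a class-transition matrix. The sketch below is a minimal NumPy version under that assumption; the function name and the transition matrix `T` are illustrative, not taken from the paper.

```python
import numpy as np

def forward_corrected_loss(probs, labels, T):
    """Forward-corrected cross-entropy: evaluate the loss on T-adjusted predictions.

    probs:  (N, C) softmax outputs of the model (distributions over clean labels)
    labels: (N,)   observed, possibly noisy, integer labels
    T:      (C, C) transition matrix with T[i, j] = P(noisy label = j | clean label = i)
    """
    # Map predicted clean-label distributions to predicted noisy-label distributions.
    noisy_probs = probs @ T
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(np.log(noisy_probs[np.arange(len(labels)), labels] + eps))
```

With `T` set to the identity matrix this reduces to ordinary cross-entropy; a non-trivial `T` reweights the loss so that minimizing it on noisy labels approximates minimizing the clean-label risk.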