[2602.17625] Catastrophic Forgetting Resilient One-Shot Incremental Federated Learning
Summary
This paper introduces One-Shot Incremental Federated Learning (OSI-FL), a novel framework that mitigates catastrophic forgetting and communication overhead in federated learning by utilizing category-specific embeddings and selective sample retention.
Why It Matters
As federated learning becomes increasingly vital for privacy-sensitive applications, addressing challenges like catastrophic forgetting and communication efficiency is crucial. OSI-FL offers a promising solution that enhances model performance while maintaining data privacy, making it relevant for researchers and practitioners in machine learning and data science.
Key Takeaways
- OSI-FL is presented as the first FL framework to jointly address catastrophic forgetting and communication overhead when data arrives incrementally.
- The framework uses category-specific embeddings to reduce communication overhead.
- Selective Sample Retention (SSR) helps retain informative samples to prevent forgetting.
- Experimental results show OSI-FL outperforms traditional federated learning methods.
- The approach is applicable in both class-incremental and domain-incremental scenarios.
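The summary does not spell out how Selective Sample Retention scores "informative" samples. As a hedged illustration only, a common choice in rehearsal-style methods is to keep the highest-entropy (most uncertain) samples per class under a fixed budget; the scoring rule and budget here are our assumptions, not the authors' method:

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy of each row of class probabilities (higher = more uncertain)."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_retained(probs: np.ndarray, labels: np.ndarray,
                    per_class_budget: int) -> np.ndarray:
    """Return indices of the most informative samples, capped per class."""
    scores = predictive_entropy(probs)
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        ranked = idx[np.argsort(scores[idx])[::-1]]  # highest entropy first
        keep.extend(ranked[:per_class_budget].tolist())
    return np.array(sorted(keep))
```

Retaining a small, fixed-size buffer per class is one standard way to bound memory while preserving examples the model is least sure about.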
Computer Science > Machine Learning
arXiv:2602.17625 (cs)
[Submitted on 19 Feb 2026]
Title: Catastrophic Forgetting Resilient One-Shot Incremental Federated Learning
Authors: Obaidullah Zaland, Zulfiqar Ahmad Khan, Monowar Bhuyan
Abstract: Modern big-data systems generate massive, heterogeneous, and geographically dispersed streams that are large-scale and privacy-sensitive, making centralization challenging. While federated learning (FL) provides a privacy-enhancing training mechanism, it assumes a static data flow and learns a collaborative model over multiple rounds, making learning with incremental data challenging in limited-communication scenarios. This paper presents One-Shot Incremental Federated Learning (OSI-FL), the first FL framework that addresses the dual challenges of communication overhead and catastrophic forgetting. OSI-FL communicates category-specific embeddings, devised by a frozen vision-language model (VLM) from each client in a single communication round, which a pre-trained diffusion model at the server uses to synthesize new data similar to the client's data distribution. The synthesized samples are used on the server for training. However, two challenges still persist: i) tasks arriving incrementally need to retrain the global model, and ii) as future tasks arrive, re...
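The abstract describes clients uploading category-specific embeddings from a frozen VLM in a single communication round. A minimal sketch of one plausible realization, where `encode` is a stand-in for a frozen VLM image encoder (not the paper's actual model) and each client uploads only one mean embedding per category it holds:

```python
import numpy as np

def encode(images: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen VLM image encoder: a fixed random projection."""
    rng = np.random.default_rng(0)            # fixed seed -> "frozen" weights
    proj = rng.standard_normal((images.shape[1], 8))
    return images @ proj

def client_payload(images: np.ndarray, labels: np.ndarray) -> dict:
    """One-shot upload: a single mean embedding per category on this client."""
    feats = encode(images)
    return {int(c): feats[labels == c].mean(axis=0) for c in np.unique(labels)}
```

In the paper's pipeline, the server would condition a pre-trained diffusion model on such embeddings to synthesize data matching each client's distribution; that server-side step is omitted here. Uploading per-category means rather than raw samples is what keeps the communication cost to one small message per client.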