[2602.17625] Catastrophic Forgetting Resilient One-Shot Incremental Federated Learning

arXiv - Machine Learning 4 min read Article

Summary

This paper introduces One-Shot Incremental Federated Learning (OSI-FL), a novel framework that mitigates catastrophic forgetting and communication overhead in federated learning by utilizing category-specific embeddings and selective sample retention.

Why It Matters

As federated learning becomes increasingly vital for privacy-sensitive applications, addressing challenges like catastrophic forgetting and communication efficiency is crucial. OSI-FL offers a promising solution that enhances model performance while maintaining data privacy, making it relevant for researchers and practitioners in machine learning and data science.

Key Takeaways

  • OSI-FL is presented as the first FL framework to jointly address catastrophic forgetting and communication overhead when data arrives incrementally.
  • The framework uses category-specific embeddings to reduce communication overhead.
  • Selective Sample Retention (SSR) helps retain informative samples to prevent forgetting.
  • Experimental results show OSI-FL outperforms traditional federated learning methods.
  • The approach is applicable in both class-incremental and domain-incremental scenarios.
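The Selective Sample Retention idea above can be illustrated with a toy criterion. The paper does not specify its retention rule in this summary, so the sketch below uses a hypothetical one: per class, keep the k samples whose embeddings lie closest to the class prototype (mean embedding). Function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def selective_sample_retention(embeddings, labels, k=2):
    """Hypothetical SSR rule: for each class, retain the k samples
    closest to the class prototype (the mean embedding)."""
    kept = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        proto = embeddings[idx].mean(axis=0)          # class prototype
        dists = np.linalg.norm(embeddings[idx] - proto, axis=1)
        kept.extend(idx[np.argsort(dists)[:k]].tolist())
    return sorted(kept)

rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 4))          # 10 samples, 4-dim embeddings
labels = np.array([0] * 5 + [1] * 5)    # two classes
print(selective_sample_retention(emb, labels, k=2))  # indices of retained samples
```

Any informativeness score (distance to prototype, loss, gradient norm) could replace the distance criterion; the point is that only a small, representative subset per class is carried forward to future tasks.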

Computer Science > Machine Learning · arXiv:2602.17625 (cs) · Submitted on 19 Feb 2026

Title: Catastrophic Forgetting Resilient One-Shot Incremental Federated Learning

Authors: Obaidullah Zaland, Zulfiqar Ahmad Khan, Monowar Bhuyan

Abstract: Modern big-data systems generate massive, heterogeneous, and geographically dispersed streams that are large-scale and privacy-sensitive, making centralization challenging. While federated learning (FL) provides a privacy-enhancing training mechanism, it assumes a static data flow and learns a collaborative model over multiple rounds, making learning with incremental data challenging in limited-communication scenarios. This paper presents One-Shot Incremental Federated Learning (OSI-FL), the first FL framework that addresses the dual challenges of communication overhead and catastrophic forgetting. OSI-FL communicates category-specific embeddings, devised by a frozen vision-language model (VLM) from each client in a single communication round, which a pre-trained diffusion model at the server uses to synthesize new data similar to the client's data distribution. The synthesized samples are used on the server for training. However, two challenges still persist: i) tasks arriving incrementally need to retrain the global model, and ii) as future tasks arrive, re...
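The one-shot pipeline described in the abstract can be sketched end to end with plain NumPy. The frozen VLM and the server-side diffusion model are replaced here by simple stand-ins (per-category mean embeddings and Gaussian sampling around them); everything beyond the overall data flow is an assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Client side (a single communication round) ---
# Stand-in for the frozen VLM: reduce local data to one embedding per
# category (the abstract's "category-specific embeddings").
def client_category_embeddings(features, labels):
    return {int(c): features[labels == c].mean(axis=0)
            for c in np.unique(labels)}

# --- Server side ---
# Stand-in for the pre-trained diffusion model: synthesize samples
# near each received category embedding.
def server_synthesize(cat_embeddings, n_per_class=16, noise=0.1):
    xs, ys = [], []
    for c, emb in cat_embeddings.items():
        xs.append(emb + noise * rng.normal(size=(n_per_class, emb.shape[0])))
        ys.extend([c] * n_per_class)
    return np.vstack(xs), np.array(ys)

# Mock client data: 40 samples, 8-dim features, two categories.
feats = rng.normal(size=(40, 8))
labs = np.repeat([0, 1], 20)

payload = client_category_embeddings(feats, labs)  # sent once, not raw data
X_syn, y_syn = server_synthesize(payload)          # server trains on these
print(X_syn.shape, y_syn.shape)
```

The key property mirrored here is the communication pattern: a client uploads only one compact embedding per category in one round, and the server reconstructs a training distribution from that payload instead of aggregating model updates over many rounds.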


