[2604.04030] Jellyfish: Zero-Shot Federated Unlearning Scheme with Knowledge Disentanglement
Computer Science > Cryptography and Security
arXiv:2604.04030 (cs) [Submitted on 5 Apr 2026]
Title: Jellyfish: Zero-Shot Federated Unlearning Scheme with Knowledge Disentanglement
Authors: Houzhe Wang, Xiaojie Zhu, Chi Chen
Abstract: With the increasing importance of data privacy and security, federated unlearning has emerged as a research field dedicated to ensuring that, once specific data is deleted, federated learning models no longer retain or disclose information about it. In this paper, we propose a zero-shot federated unlearning scheme, named Jellyfish. It distinguishes itself from conventional federated unlearning frameworks in four key aspects: synthetic data generation, knowledge disentanglement, loss function design, and model repair. To preserve the privacy of forgotten data, we design a zero-shot unlearning mechanism that generates error-minimizing noise as proxy data for the data to be forgotten. To maintain model utility, we first propose a knowledge disentanglement mechanism that regularises the output of the final convolutional layer by restricting the number of activated channels for the data to be forgotten and encouraging activation sparsity. Next, we construct a comprehensive loss function that incorporates multiple components, including hard loss, confusion loss, distillation loss...
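The abstract's zero-shot mechanism generates error-minimizing noise as proxy data for the samples to be forgotten. The paper's actual construction is not given here, but the general idea of error-minimizing perturbations can be sketched on a toy differentiable model: craft a perturbation by gradient descent so the perturbed input incurs near-zero loss under the target label. The logistic model, step sizes, and iteration count below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # y in {-1, +1}; standard logistic loss log(1 + exp(-y w.x))
    return np.log1p(np.exp(-y * np.dot(w, x)))

def error_minimizing_noise(w, x, y, steps=200, lr=0.5):
    """Toy sketch: craft a perturbation delta so that x + delta is
    classified as y with minimal loss (error-MINIMIZING, not adversarial).
    The resulting x + delta can serve as a proxy for a forgotten sample."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = y * np.dot(w, x + delta)
        grad = -y * sigmoid(-z) * w        # d loss / d input
        delta -= lr * grad                 # descend the loss w.r.t. the input
    return delta

rng = np.random.default_rng(0)
w = rng.normal(size=8)                     # frozen toy model weights
x = rng.normal(size=8)                     # sample standing in for forgotten data
y = 1
delta = error_minimizing_noise(w, x, y)
before = logistic_loss(w, x, y)
after = logistic_loss(w, x + delta, y)
print(before, after)                       # loss on the proxy drops sharply
```

In a federated setting the point of such proxies is that the real forgotten data never needs to be revisited; only the synthetic surrogate is used during unlearning.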
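The knowledge disentanglement mechanism regularises the final convolutional layer's output by restricting the number of activated channels for the forgotten data and encouraging activation sparsity. The exact regulariser is not stated in this abstract; a common surrogate, shown as a minimal sketch below, is an L1 penalty on per-channel mean activations together with a count of channels exceeding an activation threshold. The feature-map shape, threshold `eps`, and penalty form are assumptions for illustration.

```python
import numpy as np

def channel_sparsity_penalty(feat, eps=1e-3):
    """feat: (C, H, W) post-ReLU feature map from the final conv layer.
    Returns an L1 sparsity surrogate over channel energies and the number
    of channels counted as 'active' (mean activation above eps)."""
    chan_energy = feat.mean(axis=(1, 2))       # per-channel mean activation
    l1 = np.abs(chan_energy).sum()             # differentiable sparsity surrogate
    active = int((chan_energy > eps).sum())    # hard count of active channels
    return l1, active

# Toy feature map: 8 channels, only channels 0 and 3 carry signal.
feat = np.zeros((8, 4, 4))
feat[0] = 1.0
feat[3] = 0.5
l1, active = channel_sparsity_penalty(feat)
print(l1, active)  # 1.5 2
```

Adding such a penalty (for forget-set inputs only) to the training loss pushes the forgotten data's representation into a small number of channels, which is the disentanglement effect the abstract describes.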