[2410.10922] Towards Privacy-Guaranteed Label Unlearning in Vertical Federated Learning: Few-Shot Forgetting without Disclosure


arXiv - Machine Learning 4 min read Article

Summary

This paper introduces the first method for label unlearning in Vertical Federated Learning (VFL), addressing privacy concerns while maintaining model performance through a representation-level manifold mixup mechanism paired with gradient-based forgetting and recovery.

Why It Matters

As data privacy becomes increasingly critical, especially in machine learning, this research offers a significant advancement in label unlearning methods for VFL. It ensures that sensitive information can be effectively removed from models without compromising their utility, which is essential for compliance with privacy regulations and ethical standards in AI.

Key Takeaways

  • Introduces the first method for label unlearning in Vertical Federated Learning.
  • Utilizes a representation-level manifold mixup mechanism for effective unlearning.
  • Demonstrates strong efficacy and scalability across various datasets.
  • Maintains computational efficiency while ensuring privacy.
  • Establishes a new direction for practical unlearning in machine learning.
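To make the representation-level manifold mixup idea above concrete, here is a minimal sketch in NumPy. It is an illustration of mixup applied to hidden embeddings in general (Beta-sampled interpolation between two embedding batches), not the paper's exact procedure; the function name, the Beta parameter `alpha=2.0`, and the stand-in embeddings are all assumptions for the example.

```python
import numpy as np

def manifold_mixup(emb_a, emb_b, alpha=2.0, rng=None):
    """Mix two batches of embeddings with a Beta-sampled coefficient.

    Hypothetical sketch: lam ~ Beta(alpha, alpha) interpolates
    representations rather than raw inputs, yielding synthetic
    embeddings "between" the two batches.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    return lam * emb_a + (1.0 - lam) * emb_b, lam

# Example: mix embeddings of samples to forget with retained samples
unlearned = np.ones((4, 8))   # stand-in embeddings of unlearned samples
retained = np.zeros((4, 8))   # stand-in embeddings of retained samples
mixed, lam = manifold_mixup(unlearned, retained)
print(mixed.shape)  # (4, 8)
```

In the paper's setting, such synthetic embeddings would then feed the subsequent gradient-based forgetting and recovery steps described in the abstract.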

Computer Science > Machine Learning — arXiv:2410.10922 (cs)

[Submitted on 14 Oct 2024 (v1), last revised 26 Feb 2026 (this version, v3)]

Title: Towards Privacy-Guaranteed Label Unlearning in Vertical Federated Learning: Few-Shot Forgetting without Disclosure

Authors: Hanlin Gu, Hong Xi Tae, Chee Seng Chan, Lixin Fan

Abstract: This paper addresses the critical challenge of unlearning in Vertical Federated Learning (VFL), a setting that has received far less attention than its horizontal counterpart. Specifically, we propose the first method tailored to label unlearning in VFL, where labels play a dual role as both essential inputs and sensitive information. To this end, we employ a representation-level manifold mixup mechanism to generate synthetic embeddings for both unlearned and retained samples. This is to provide richer signals for the subsequent gradient-based label forgetting and recovery steps. These augmented embeddings are then subjected to gradient-based l...
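The abstract's "gradient-based label forgetting and recovery" can be sketched as one update that ascends the loss on embeddings of samples to forget while descending it on retained samples, so utility is recovered as the label is unlearned. This is a hypothetical NumPy illustration of that general ascent/descent pattern on a linear top model `W`; the function name, the equal weighting of the two terms, and the learning rate are assumptions, not the paper's exact recovery procedure.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def unlearn_step(W, unl_emb, unl_y, ret_emb, ret_y, lr=0.1):
    """One hypothetical label-forgetting step on a linear top model W.

    Sketch only: gradient ASCENT on the cross-entropy of samples to
    forget, gradient DESCENT on retained samples.
    """
    def grad(emb, y):
        p = softmax(emb @ W)              # (n, k) class probabilities
        p[np.arange(len(y)), y] -= 1.0    # dL/dlogits for cross-entropy
        return emb.T @ p / len(y)         # (d, k) gradient w.r.t. W
    # descend on retained loss, ascend on unlearned loss
    return W - lr * (grad(ret_emb, ret_y) - grad(unl_emb, unl_y))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))               # stand-in label-holder top model
unl_emb, ret_emb = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
unl_y, ret_y = rng.integers(0, 3, 4), rng.integers(0, 3, 4)
W_new = unlearn_step(W, unl_emb, unl_y, ret_emb, ret_y)
print(W_new.shape)  # (8, 3)
```

In the paper, this kind of step would operate on the mixup-augmented embeddings rather than raw ones, which is what supplies the "richer signals" the abstract mentions.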

