[2602.17614] Guarding the Middle: Protecting Intermediate Representations in Federated Split Learning

arXiv - Machine Learning 4 min read Article

Summary

This paper presents KD-UFSL, a method to enhance privacy in federated split learning by minimizing data leakage through intermediate representations while maintaining model utility.

Why It Matters

As federated learning becomes more prevalent in handling sensitive data, ensuring privacy without sacrificing performance is crucial. This research addresses the vulnerabilities in intermediate data representations, providing a solution that balances privacy and utility, which is essential for large-scale applications.

Key Takeaways

  • KD-UFSL employs k-anonymity and differential privacy to protect client data.
  • The method significantly reduces the risk of data leakage through intermediate representations.
  • Experiments show a trade-off between privacy enhancement and model utility preservation.
  • The approach is suitable for big data applications requiring privacy and performance.
  • Demonstrates the effectiveness of privacy-enhancing techniques in federated learning.
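The first two takeaways can be made concrete with a minimal sketch. The snippet below illustrates the general idea of microaggregation-based k-anonymity followed by the Laplace mechanism for differential privacy applied to a batch of smashed data; it is not the paper's implementation, and the function names, grouping heuristic, and the k and epsilon values are all illustrative assumptions.

```python
import numpy as np

def microaggregate(smashed, k=3):
    """k-anonymity via microaggregation: sort rows by norm, cluster them
    into groups of at least k, and replace each row with its group centroid
    so no individual representation is distinguishable within its group."""
    order = np.argsort(np.linalg.norm(smashed, axis=1))
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    # Merge an undersized final group so every cluster has >= k rows.
    if len(groups) > 1 and len(groups[-1]) < k:
        groups[-2] = np.concatenate([groups[-2], groups.pop()])
    out = np.empty(smashed.shape, dtype=float)
    for idx in groups:
        out[idx] = smashed[idx].mean(axis=0)
    return out

def add_laplace_noise(smashed, epsilon=1.0, sensitivity=1.0, rng=None):
    """Laplace mechanism: add noise with scale sensitivity/epsilon to give
    epsilon-differential privacy (assuming bounded sensitivity)."""
    rng = rng if rng is not None else np.random.default_rng()
    return smashed + rng.laplace(0.0, sensitivity / epsilon,
                                 size=smashed.shape)

# Protect a batch of smashed data before it is sent to the server.
batch = np.random.default_rng(0).normal(size=(8, 4))
protected = add_laplace_noise(microaggregate(batch, k=4), epsilon=2.0)
```

Combining the two mechanisms in this order mirrors the abstract's description: microaggregation coarsens the representations, and the added noise bounds what any single record can reveal.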

Computer Science > Machine Learning · arXiv:2602.17614 (cs) · Submitted on 19 Feb 2026

Title: Guarding the Middle: Protecting Intermediate Representations in Federated Split Learning
Authors: Obaidullah Zaland, Sajib Mistry, Monowar Bhuyan

Abstract: Big data scenarios, where massive, heterogeneous datasets are distributed across clients, demand scalable, privacy-preserving learning methods. Federated learning (FL) enables decentralized training of machine learning (ML) models across clients without data centralization. Decentralized training, however, introduces a computational burden on client devices. U-shaped federated split learning (UFSL) offloads a fraction of the client computation to the server while keeping both data and labels on the clients' side. However, the intermediate representations (i.e., smashed data) shared by clients with the server are prone to exposing clients' private data. To reduce exposure of client data through intermediate data representations, this work proposes k-anonymous differentially private UFSL (KD-UFSL), which leverages privacy-enhancing techniques such as microaggregation and differential privacy to minimize data leakage from the smashed data transferred to the server. We first demonstrate that an adversary can access private client data from intermedi...
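As a rough sketch of the U-shaped split described in the abstract: the client keeps the head (which sees raw inputs) and the tail (which sees labels), while the server runs only the body and receives just the smashed data. The layer sizes and weights below are arbitrary placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Client-side head: raw inputs never leave the client.
W_head = rng.normal(size=(16, 32))
# Server-side body: sees only the smashed data, never inputs or labels.
W_body = rng.normal(size=(32, 32))
# Client-side tail: labels also stay on the client.
W_tail = rng.normal(size=(32, 2))

x = rng.normal(size=(4, 16))        # private client data
smashed = relu(x @ W_head)          # client -> server (the exposed tensor)
body_out = relu(smashed @ W_body)   # server -> client
logits = body_out @ W_tail          # client computes the loss locally
```

In KD-UFSL, the privacy mechanisms described in the abstract would be applied to `smashed` before it leaves the client, since that tensor is the attack surface the paper targets.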
