[2602.15304] Hybrid Federated and Split Learning for Privacy Preserving Clinical Prediction and Treatment Optimization
Summary
This paper presents a hybrid framework combining Federated Learning and Split Learning to preserve privacy in collaborative clinical prediction and treatment optimization.
Why It Matters
As healthcare increasingly relies on data-driven decision-making, maintaining patient privacy is crucial. This framework addresses privacy concerns while allowing institutions to collaborate on clinical predictions, which can lead to improved patient outcomes and more efficient healthcare systems.
Key Takeaways
- The hybrid framework merges Federated Learning and Split Learning for enhanced privacy in clinical applications.
- It allows for shared representation learning without raw data sharing, addressing privacy regulations.
- Empirical audits reveal potential privacy leakage, prompting the need for lightweight defenses.
- The approach balances predictive performance with privacy and communication costs.
- Results indicate that the hybrid models outperform standalone FL and SL baselines in balancing predictive utility with privacy controls.
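The lightweight defense mentioned above (activation clipping plus additive Gaussian noise on cut-layer representations) can be sketched as follows. This is an illustrative implementation, not the authors' code; the function name `defend_cut_layer` and the parameter values (`clip_norm`, `noise_std`) are assumptions chosen for the example.

```python
import numpy as np

def defend_cut_layer(activations, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip each sample's cut-layer activation to a maximum L2 norm,
    then add Gaussian noise -- a sketch of the lightweight defense
    against membership inference described in the paper."""
    rng = np.random.default_rng() if rng is None else rng
    # Per-sample L2 norms, kept as a column for broadcasting.
    norms = np.linalg.norm(activations, axis=1, keepdims=True)
    # Scale down only those samples whose norm exceeds clip_norm.
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = activations * scale
    # Additive Gaussian noise obscures fine-grained sample information.
    return clipped + rng.normal(0.0, noise_std, size=clipped.shape)

# Example: a batch of 4 samples with 8-dimensional cut-layer features.
acts = np.random.default_rng(0).normal(size=(4, 8))
defended = defend_cut_layer(acts, clip_norm=1.0, noise_std=0.05)
```

With `noise_std=0.0` the function reduces to pure norm clipping, which makes the privacy/utility knobs easy to ablate independently.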
Computer Science > Machine Learning — arXiv:2602.15304 (cs)
Submitted on 17 Feb 2026
Title: Hybrid Federated and Split Learning for Privacy Preserving Clinical Prediction and Treatment Optimization
Authors: Farzana Akter, Rakib Hossain, Deb Kanna Roy Toushi, Mahmood Menon Khan, Sultana Amin, Lisan Al Amin
Abstract: Collaborative clinical decision support is often constrained by governance and privacy rules that prevent pooling patient-level records across institutions. We present a hybrid privacy-preserving framework that combines Federated Learning (FL) and Split Learning (SL) to support decision-oriented healthcare modeling without raw-data sharing. The approach keeps feature-extraction trunks on clients while hosting prediction heads on a coordinating server, enabling shared representation learning and exposing an explicit collaboration boundary where privacy controls can be applied. Rather than assuming distributed training is inherently private, we audit leakage empirically using membership inference on cut-layer representations and study lightweight defenses based on activation clipping and additive Gaussian noise. We evaluate across three public clinical datasets under non-IID client partitions using a unified pipeline and assess performance jointly along four deployme...
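The abstract's split between client-side feature-extraction trunks and a server-side prediction head can be sketched minimally as below. All shapes, weights, and function names here are hypothetical; the point is that only the cut-layer representation `z`, never the raw patient features `x`, crosses the collaboration boundary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical shapes: 10 input features, 6-dim cut layer, 1 output.
W_trunk = rng.normal(scale=0.1, size=(10, 6))  # lives on the client
W_head = rng.normal(scale=0.1, size=(6, 1))    # lives on the server

def client_trunk(x):
    # Client-side feature extractor; only its output crosses the boundary,
    # which is exactly where clipping/noise defenses would be applied.
    return np.tanh(x @ W_trunk)

def server_head(z):
    # Server-side prediction head operating on cut-layer representations.
    logits = z @ W_head
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid risk score

x = rng.normal(size=(4, 10))  # patient-level features stay local
z = client_trunk(x)           # cut-layer representation sent to server
p = server_head(z)            # server returns per-patient predictions
```

In the full framework, gradients for the trunk would flow back across the same boundary during training, which is why the paper audits leakage at the cut layer specifically.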