[2505.11035] Deep Latent Variable Model based Vertical Federated Learning with Flexible Alignment and Labeling Scenarios
Computer Science > Machine Learning
arXiv:2505.11035 (cs)
[Submitted on 16 May 2025 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: Deep Latent Variable Model based Vertical Federated Learning with Flexible Alignment and Labeling Scenarios
Authors: Kihun Hong, Sejun Park, Ganguk Hwang

Abstract: Federated learning (FL) has attracted significant attention for enabling collaborative learning without exposing private data. Among the primary variants of FL, vertical federated learning (VFL) addresses feature-partitioned data held by multiple institutions, each holding complementary information about the same set of users. However, existing VFL methods often impose restrictive assumptions, such as a small number of participating parties, fully aligned data, or reliance on labeled data alone. In this work, we reinterpret alignment gaps in VFL as missing-data problems and propose a unified framework that accommodates both training and inference under arbitrary alignment and labeling scenarios, while supporting diverse missingness mechanisms. In experiments on 168 configurations spanning four benchmark datasets, six training-time missingness patterns, and seven testing-time missingness patterns, our method outperforms all baselines in 160 cases, with an average gap of 9.6 percentage...
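The abstract's core idea of reinterpreting VFL alignment gaps as missing data can be illustrated with a minimal sketch. The party names, user IDs, and feature columns below are hypothetical, not from the paper: two parties hold different feature columns for overlapping but not identical user sets, and an outer join on user IDs turns the misalignment into an ordinary missingness mask.

```python
import numpy as np
import pandas as pd

# Hypothetical VFL setting: each party holds a different feature
# column for a partially overlapping set of users.
party_a = pd.DataFrame({"age": [34, 51, 29]}, index=["u1", "u2", "u3"])
party_b = pd.DataFrame({"income": [72_000, 48_000]}, index=["u2", "u4"])

# Outer-join on user IDs: a user absent from one party appears as NaN
# in that party's columns, so the alignment gap becomes a standard
# missing-data pattern that imputation-style models can handle.
joined = party_a.join(party_b, how="outer")
mask = joined.isna()  # True where a party lacks that user's features

print(mask)
```

Here only "u2" is fully aligned; "u1" and "u3" are missing party B's feature, and "u4" is missing party A's. The paper's framework generalizes this view to arbitrary alignment and labeling patterns across many parties.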