[2505.11035] Deep Latent Variable Model based Vertical Federated Learning with Flexible Alignment and Labeling Scenarios

Computer Science > Machine Learning
arXiv:2505.11035 (cs)
[Submitted on 16 May 2025 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: Deep Latent Variable Model based Vertical Federated Learning with Flexible Alignment and Labeling Scenarios
Authors: Kihun Hong, Sejun Park, Ganguk Hwang

Abstract: Federated learning (FL) has attracted significant attention for enabling collaborative learning without exposing private data. Among the primary variants of FL, vertical federated learning (VFL) addresses feature-partitioned data held by multiple institutions, each holding complementary information for the same set of users. However, existing VFL methods often impose restrictive assumptions, such as a small number of participating parties, fully aligned data, or the use of labeled data only. In this work, we reinterpret alignment gaps in VFL as missing data problems and propose a unified framework that accommodates both training and inference under arbitrary alignment and labeling scenarios, while supporting diverse missingness mechanisms. In experiments on 168 configurations spanning four benchmark datasets, six training-time missingness patterns, and seven testing-time missingness patterns, our method outperforms all baselines in 160 cases with an average gap of 9.6 percentage...
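The setting the abstract describes can be made concrete with a toy sketch. This is not the paper's method, only an illustration of the VFL data layout it starts from: two hypothetical parties each hold a complementary block of features for an overlapping but not identical set of users, so the combined feature matrix has block-wise missing entries, and an alignment gap is exactly a missing feature block.

```python
import numpy as np

# Toy illustration (not the paper's algorithm): two parties hold
# complementary feature blocks for partially overlapping user sets.
rng = np.random.default_rng(0)

users = [f"u{i}" for i in range(6)]
party_a_users = set(users[:4])   # party A observes users u0..u3
party_b_users = set(users[2:])   # party B observes users u2..u5

d_a, d_b = 3, 2                  # feature dimensions held by each party
X = np.full((len(users), d_a + d_b), np.nan)

for i, u in enumerate(users):
    if u in party_a_users:
        X[i, :d_a] = rng.normal(size=d_a)   # party A's feature block
    if u in party_b_users:
        X[i, d_a:] = rng.normal(size=d_b)   # party B's feature block

mask = ~np.isnan(X)              # observed-entry mask: an alignment gap
aligned = mask.all(axis=1)       # shows up as an all-missing block
print("fully aligned users:", [u for u, a in zip(users, aligned) if a])
```

Under this view, the unaligned rows need not be discarded: a missing-data treatment (as the paper proposes) can still use the partially observed rows during training and inference.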

Originally published on March 31, 2026. Curated by AI News.
