[2603.03777] LEA: Label Enumeration Attack in Vertical Federated Learning
Computer Science > Machine Learning
arXiv:2603.03777 (cs) [Submitted on 4 Mar 2026]

Title: LEA: Label Enumeration Attack in Vertical Federated Learning
Authors: Wenhao Jiang, Shaojing Fu, Yuchuan Luo, Lin Liu

Abstract: A typical Vertical Federated Learning (VFL) scenario involves several participants collaboratively training a machine learning model, where each party holds different features for the same samples and the labels are held exclusively by one party. Since labels contain sensitive information, VFL must ensure their privacy. However, existing label inference attacks targeting VFL are either limited to specific scenarios or require auxiliary data, rendering them impractical in real-world applications. We introduce a novel Label Enumeration Attack (LEA) that, for the first time, is applicable across multiple VFL scenarios and requires no auxiliary data. Our intuition is that an adversary can enumerate candidate mappings between samples and labels via clustering, then identify the correct mapping by evaluating the similarity between the benign model and simulated models trained under each mapping. To achieve this, the first challenge is how to measure model similarity, since models trained on the same data can have different weights. Drawing from our findings, we propose an efficient approach fo...
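The enumeration idea described in the abstract can be illustrated with a hedged toy sketch (this is not the paper's algorithm; all names and the similarity measure are assumptions). An adversary clusters the samples, enumerates cluster-to-label mappings, trains a simulated model under each mapping, and keeps the mapping whose model best matches the benign model. Since comparing raw weights is unreliable, the sketch uses prediction agreement as a crude stand-in for the paper's similarity measure:

```python
# Toy illustration of a label enumeration attack (hypothetical simplification,
# not the method of the LEA paper). Binary labels, 2-D features, pure NumPy.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Synthetic data: the adversary sees features X but not the labels y_true.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y_true = np.array([0] * 50 + [1] * 50)

def kmeans(X, k=2, iters=20):
    # Deterministic init: first and last sample (one from each blob here).
    centers = X[[0, -1]].copy()
    for _ in range(iters):
        dists = ((X[:, None] - centers[None]) ** 2).sum(-1)
        assign = dists.argmin(1)
        centers = np.array([X[assign == j].mean(0) for j in range(k)])
    return assign

def train_logreg(X, y, lr=0.1, steps=200):
    # Minimal logistic regression via gradient descent.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

benign = train_logreg(X, y_true)  # stands in for the benign VFL model
clusters = kmeans(X)

best_agree, recovered = -1.0, None
for perm in permutations(range(2)):  # enumerate cluster -> label mappings
    y_guess = np.array([perm[c] for c in clusters])
    sim_model = train_logreg(X, y_guess)
    # Crude model-similarity proxy: agreement of predictions on the features.
    agree = (predict(sim_model, X) == predict(benign, X)).mean()
    if agree > best_agree:
        best_agree, recovered = agree, y_guess

print("recovered label accuracy:", (recovered == y_true).mean())
```

With well-separated clusters the wrong mapping produces a model that disagrees with the benign one on most samples, so the correct mapping wins. The real challenge the abstract points to, measuring similarity between models whose weights differ even on identical data, is exactly what this agreement heuristic glosses over.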