[2602.15996] Exploring New Frontiers in Vertical Federated Learning: the Role of Saddle Point Reformulation
Summary
This paper explores the saddle point reformulation of Vertical Federated Learning (VFL), presenting methods for efficiently training a model across devices that hold different features for the same set of users.
Why It Matters
Vertical Federated Learning is crucial for privacy-preserving machine learning: it enables collaborative model training without sharing raw data. This research enhances VFL by introducing a saddle point reformulation, which improves algorithm efficiency and adaptability in practical scenarios such as compressed or asynchronous communication.
Key Takeaways
- Saddle point reformulation enhances the efficiency of Vertical Federated Learning.
- The paper introduces stochastic modifications for practical application, including compression techniques and asynchronous communication.
- Convergence estimates demonstrate the effectiveness of the proposed algorithms in addressing VFL challenges.
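To make the reformulation in the takeaways concrete, the sketch below sets up a toy two-device vertical least-squares problem, introduces the shared prediction through a consensus constraint with a Lagrange multiplier, and solves the resulting convex-concave saddle point with the extragradient method. The problem sizes, step size, and the choice of extragradient are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vertical split: two "devices" hold different feature columns of the same users.
n, d1, d2 = 50, 3, 2
X1 = rng.normal(size=(n, d1))
X2 = rng.normal(size=(n, d2))
w_true = rng.normal(size=d1 + d2)
y = X1 @ w_true[:d1] + X2 @ w_true[d1:]

# Lagrangian saddle-point reformulation of least squares with the consensus
# constraint z = X1 @ w1 + X2 @ w2 (z is the shared prediction vector):
#   min_{w1, w2, z}  max_lam   0.5 * ||z - y||^2 + lam @ (X1 @ w1 + X2 @ w2 - z)
w1, w2 = np.zeros(d1), np.zeros(d2)
z, lam = np.zeros(n), np.zeros(n)
eta = 0.02  # hypothetical step size, not from the paper

def grads(w1, w2, z, lam):
    """Partial gradients of the Lagrangian at the given point."""
    return (X1.T @ lam,               # d/dw1 (computed by device 1)
            X2.T @ lam,               # d/dw2 (computed by device 2)
            (z - y) - lam,            # d/dz
            X1 @ w1 + X2 @ w2 - z)    # d/dlam (constraint violation)

for _ in range(8000):
    # Extragradient: a look-ahead half-step, then a step using half-step gradients;
    # plain simultaneous descent-ascent can cycle on the bilinear coupling.
    g1, g2, gz, gl = grads(w1, w2, z, lam)
    h1, h2 = w1 - eta * g1, w2 - eta * g2
    hz, hl = z - eta * gz, lam + eta * gl
    g1, g2, gz, gl = grads(h1, h2, hz, hl)
    w1, w2 = w1 - eta * g1, w2 - eta * g2
    z, lam = z - eta * gz, lam + eta * gl

mse = float(np.mean((X1 @ w1 + X2 @ w2 - y) ** 2))
print(mse)  # close to zero: the constrained saddle point recovers the least-squares fit
```

Note how the structure matches the VFL setting: each device only ever needs its own feature block and the shared dual vector, which is what makes compressing or sub-sampling the transmitted vectors natural in the saddle formulation.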
Paper Details
arXiv:2602.15996 (math) — Mathematics > Optimization and Control
Submitted on 17 Feb 2026
Title: Exploring New Frontiers in Vertical Federated Learning: the Role of Saddle Point Reformulation
Authors: Aleksandr Beznosikov, Georgiy Kormakov, Alexander Grigorievskiy, Mikhail Rudakov, Ruslan Nazykov, Alexander Rogozin, Anton Vakhrushev, Andrey Savchenko, Martin Takáč, Alexander Gasnikov
Abstract: The objective of Vertical Federated Learning (VFL) is to collectively train a model using features available on different devices while sharing the same users. This paper focuses on the saddle point reformulation of the VFL problem via the classical Lagrangian function. We first demonstrate how this formulation can be solved using deterministic methods. More importantly, we explore various stochastic modifications to adapt to practical scenarios, such as employing compression techniques for efficient information transmission, enabling partial participation for asynchronous communication, and utilizing coordinate selection for faster local computation. We show that the saddle point reformulation plays a key role, opening up possibilities to use the mentioned extensions, which seem to be impossible in the standard minimization formulation. Convergence estimates are provided for each algorithm, demonstrating their effectiveness.
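The compression the abstract mentions is typically realized as an unbiased operator applied to the vectors exchanged between devices. Below is a minimal sketch of one standard such operator, rand-k sparsification; the paper's actual compressors may differ, and the function name and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_k(v, k, rng):
    """Unbiased rand-k sparsifier: keep k random coordinates, rescale by d/k.

    E[rand_k(v)] = v, so it can stand in for the exact vector inside a
    stochastic saddle-point method while sending only k of d coordinates.
    """
    d = len(v)
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out

v = rng.normal(size=10)
# Averaging many compressed copies approaches the original vector (unbiasedness).
est = np.mean([rand_k(v, 3, rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(est - v)))  # small: the estimator is unbiased
```

Each call transmits only 3 of 10 coordinates, trading extra variance for less communication; convergence analyses of such methods account for this variance explicitly.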