[2602.15996] Exploring New Frontiers in Vertical Federated Learning: the Role of Saddle Point Reformulation

arXiv - Machine Learning · 3 min read

Summary

This paper explores a saddle point reformulation of Vertical Federated Learning (VFL), presenting methods for efficiently training a model across devices that hold different features for the same users.

Why It Matters

Vertical Federated Learning is crucial for privacy-preserving machine learning: it lets parties that hold different features about the same users train a model collaboratively without sharing raw data. By recasting the VFL objective as a saddle point problem, this work makes practical extensions such as communication compression and asynchronous participation tractable, with convergence guarantees for each resulting algorithm.

Key Takeaways

  • Saddle point reformulation enhances the efficiency of Vertical Federated Learning.
  • The paper introduces stochastic modifications for practical application, including compression techniques and asynchronous communication.
  • Convergence estimates demonstrate the effectiveness of the proposed algorithms in addressing VFL challenges.
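As a concrete illustration of one such stochastic modification, below is a minimal sketch of an unbiased random-k sparsification compressor, a standard communication-compression primitive in federated optimization. This is a generic example, not code from the paper; the function name `rand_k` and its parameters are assumptions for illustration.

```python
import numpy as np

def rand_k(x, k, rng):
    """Unbiased random-k sparsification: keep k randomly chosen
    coordinates of x and rescale by d/k so that E[rand_k(x)] = x."""
    d = x.shape[0]
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)  # k distinct coordinates
    out[idx] = x[idx] * (d / k)                 # rescale for unbiasedness
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(100)

# A single compressed message carries at most k nonzero entries.
c = rand_k(x, 10, rng)
print(np.count_nonzero(c))  # 10

# Averaging many compressed copies recovers x (empirical unbiasedness).
est = np.mean([rand_k(x, 10, rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(est - x)) < 0.5)  # True
```

The d/k rescaling keeps the compressor unbiased, which is what allows it to be plugged into a stochastic method while preserving convergence in expectation.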

Mathematics > Optimization and Control
arXiv:2602.15996 (math) [Submitted on 17 Feb 2026]

Title: Exploring New Frontiers in Vertical Federated Learning: the Role of Saddle Point Reformulation
Authors: Aleksandr Beznosikov, Georgiy Kormakov, Alexander Grigorievskiy, Mikhail Rudakov, Ruslan Nazykov, Alexander Rogozin, Anton Vakhrushev, Andrey Savchenko, Martin Takáč, Alexander Gasnikov

Abstract: The objective of Vertical Federated Learning (VFL) is to collectively train a model using features available on different devices while sharing the same users. This paper focuses on the saddle point reformulation of the VFL problem via the classical Lagrangian function. We first demonstrate how this formulation can be solved using deterministic methods. More importantly, we explore various stochastic modifications to adapt to practical scenarios, such as employing compression techniques for efficient information transmission, enabling partial participation for asynchronous communication, and utilizing coordinate selection for faster local computation. We show that the saddle point reformulation plays a key role and opens up possibilities to use the mentioned extensions, which seem to be impossible in the standard minimization formulation. Convergence estimates are provided for each algorithm, demonstrating their effectiveness.
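The saddle point reformulation described in the abstract can be sketched generically as follows. The notation and constraint structure below are illustrative assumptions, not the paper's exact formulation: device m holds a feature block A_m and local parameters x_m, and the classical Lagrangian with dual variable y turns the constrained minimization over the aggregated prediction z into a min-max problem.

```latex
% Constrained form: each device m contributes A_m x_m to the prediction z
\min_{x_1,\dots,x_M,\; z} \; f(z)
\quad \text{s.t.} \quad z = \sum_{m=1}^{M} A_m x_m

% Classical Lagrangian gives the saddle point (min-max) reformulation
\min_{x,\, z} \; \max_{y} \;
  \mathcal{L}(x, z, y)
  = f(z) + \Big\langle y,\; \sum_{m=1}^{M} A_m x_m - z \Big\rangle
```

In this form each device updates only its own x_m, and the quantities exchanged are the partial products A_m x_m and the dual variable y, which is what makes extensions such as compression and partial participation natural to apply.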

