[2502.10295] Fenchel-Young Variational Learning

arXiv - Machine Learning · 4 min read · Article

Summary

The paper introduces Fenchel-Young (FY) variational learning, a new class of variational methods built on Fenchel-Young losses, which generalize the Kullback-Leibler divergence at the core of classical variational learning. The framework yields FY variants of classical algorithms and models and is empirically competitive with its classical counterparts.

Why It Matters

This research expands the toolkit for statistical learning: by replacing the Kullback-Leibler divergence with the broader family of Fenchel-Young losses, it enables learning a wider class of models than previous variational formulations, including models with sparse observations and sparse posteriors. That added flexibility makes it significant for both researchers and practitioners in machine learning.

Key Takeaways

  • Introduces a new class of variational methods based on Fenchel-Young (FY) losses, treated as divergences that generalize the Kullback-Leibler divergence (see the background sketch after this list).
  • Generalizes classical algorithms and latent-variable models, yielding an FY expectation-maximization (FYEM) algorithm and an FY variational autoencoder (FYVAE).
  • Demonstrates empirical competitiveness, often outperforming classical counterparts, along with qualitatively novel features.
  • Supports models with sparse observations and sparse posteriors.
  • Provides alternating minimization and gradient backpropagation algorithms to compute (or lower bound) the FY evidence.
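
For readers unfamiliar with the loss family, here is a minimal background sketch using the standard definition from the Fenchel-Young loss literature; the notation (Ω, θ, μ) is illustrative and not necessarily the paper's:

```latex
% Fenchel-Young loss generated by a convex regularizer \Omega
% (standard definition; notation is illustrative, not the paper's):
L_{\Omega}(\theta; \mu) \;=\; \Omega^{*}(\theta) + \Omega(\mu) - \langle \theta, \mu \rangle ,
\qquad
\Omega^{*}(\theta) \;=\; \sup_{\mu'} \, \langle \theta, \mu' \rangle - \Omega(\mu') .

% With \Omega the negative Shannon entropy on the probability simplex,
% \Omega^{*} is log-sum-exp and the loss reduces to a KL divergence:
L_{\Omega}(\theta; \mu) \;=\; \mathrm{KL}\big(\mu \,\big\|\, \operatorname{softmax}(\theta)\big) ,
% which is the sense in which FY losses generalize the KL divergence
% at the core of classical variational learning.
```

Other choices of Ω (for example, squared norms or Tsallis entropies) induce argmax maps that can place exactly zero probability on some outcomes, which is what makes sparse posteriors possible.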

Computer Science > Machine Learning
arXiv:2502.10295 (cs)
[Submitted on 14 Feb 2025 (v1), last revised 14 Feb 2026 (this version, v3)]

Title: Fenchel-Young Variational Learning
Authors: Sophia Sklaviadis, Thomas Moellenhoff, Andre Martins, Mario Figueiredo

Abstract: From a variational perspective, many statistical learning criteria involve seeking a distribution that balances empirical risk and regularization. In this paper, we broaden this perspective by introducing a new general class of variational methods based on Fenchel-Young (FY) losses, treated as divergences that generalize (and encompass) the familiar Kullback-Leibler divergence at the core of classical variational learning. Our proposed formulation -- FY variational learning -- includes as key ingredients new notions of FY free energy, FY evidence, FY evidence lower bound, and FY posterior. We derive alternating minimization and gradient backpropagation algorithms to compute (or lower bound) the FY evidence, which enables learning a wider class of models than previous variational formulations. This leads to generalized FY variants of classical algorithms, such as an FY expectation-maximization (FYEM) algorithm, and latent-variable models, such as an FY variational autoencoder (FYVAE). Our new methods are shown to be empirically competitive, often outperforming their classical counterparts...
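
To make the "sparse posteriors" point concrete, here is a minimal runnable sketch (not the authors' code; all names are illustrative). It implements the FY loss generated by Ω(p) = ½‖p‖² restricted to the probability simplex, whose argmax map is sparsemax and can assign exactly zero probability, unlike softmax:

```python
import numpy as np

def sparsemax(theta):
    """Euclidean projection of theta onto the probability simplex
    (Martins & Astudillo, 2016). Unlike softmax, the result can
    contain exact zeros, i.e. a sparse distribution."""
    z = np.sort(theta)[::-1]                 # scores sorted descending
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z)                      # cumulative sums of sorted scores
    support = z * k > (cssv - 1.0)           # coordinates kept in the support
    k_max = k[support][-1]                   # size of the support
    tau = (cssv[support][-1] - 1.0) / k_max  # normalizing threshold
    return np.maximum(theta - tau, 0.0)

def fy_loss(theta, mu):
    """Fenchel-Young loss for Omega(p) = 0.5 * ||p||^2 on the simplex.
    The conjugate is evaluated at its argmax p* = sparsemax(theta):
        Omega*(theta) = <theta, p*> - Omega(p*)."""
    p_star = sparsemax(theta)
    omega_conj = theta @ p_star - 0.5 * p_star @ p_star
    return omega_conj + 0.5 * mu @ mu - theta @ mu

theta = np.array([2.0, 1.2, -0.5, -1.0])     # unnormalized scores
mu = np.array([1.0, 0.0, 0.0, 0.0])          # one-hot target
print(sparsemax(theta))                      # [0.9, 0.1, 0.0, 0.0]: exact zeros
print(fy_loss(theta, mu))                    # nonnegative; 0 iff sparsemax(theta) == mu
```

Because this Ω is strictly convex, the loss is nonnegative and vanishes exactly when μ = sparsemax(θ); swapping in a different regularizer Ω changes the induced divergence, which is the degree of freedom FY variational learning exploits.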

Related Articles

UMKC Announces New Master of Science in Artificial Intelligence
AI Infrastructure

UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...

AI News - General · 4 min

University of Tartu thesis: transfer learning boosts Estonian AI models
Machine Learning

AI News - General · 4 min

ACM Prize in Computing Honors Matei Zaharia for Foundational Contributions to Data and Machine Learning Systems
Machine Learning

AI News - General · 6 min

Sam Altman's Coworkers Say He Can Barely Code and Misunderstands Basic Machine Learning Concepts
Machine Learning

AI News - General · 2 min
