[2410.21764] Adaptive Online Mirror Descent for Tchebycheff Scalarization in Multi-Objective Learning
Computer Science > Machine Learning

arXiv:2410.21764 (cs)

[Submitted on 29 Oct 2024 (v1), last revised 26 Mar 2026 (this version, v3)]

Title: Adaptive Online Mirror Descent for Tchebycheff Scalarization in Multi-Objective Learning

Authors: Meitong Liu, Xiaoyuan Zhang, Chulin Xie, Kate Donahue, Han Zhao

Abstract: Multi-objective learning (MOL) aims to learn under multiple potentially conflicting objectives and strike a proper balance. While recent preference-guided MOL methods often rely on additional optimization objectives or constraints, we consider the classic Tchebycheff scalarization (TCH) that naturally allows for locating solutions with user-specified trade-offs. Due to its minimax formulation, directly optimizing TCH often leads to training oscillation and stagnation. In light of this limitation, we propose an adaptive online mirror descent algorithm for TCH, called (Ada)OMD-TCH. One of our main ingredients is an adaptive online-to-batch conversion that significantly improves solution optimality over traditional conversion in practice while maintaining the same theoretical convergence guarantees. We show that (Ada)OMD-TCH achieves a convergence rate of $\mathcal O(\sqrt{\log m/T})$, where $m$ is the number of objectives and $T$ is the number of rounds, providing a tighter dependency on ...
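To make the minimax formulation concrete, the sketch below illustrates the general idea behind OMD-TCH on a toy problem: the Tchebycheff scalarization $\min_x \max_i f_i(x)$ is relaxed to $\min_x \max_{w \in \Delta_m} \sum_i w_i f_i(x)$, and the simplex weights $w$ are updated by entropic online mirror descent (exponentiated gradient) while the parameters $x$ take gradient steps on the weighted sum; the final solution uses a simple iterate average as the online-to-batch conversion. The quadratic objectives, step sizes, and plain averaging here are illustrative assumptions, not the paper's adaptive conversion or experimental setup.

```python
import numpy as np

# Toy objectives: f1(x) = (x - 1)^2, f2(x) = (x + 1)^2.
# The Tchebycheff (min-max) solution is x = 0 with worst-case loss 1.
anchors = np.array([1.0, -1.0])

def losses(x):
    return (x - anchors) ** 2        # per-objective losses f_i(x)

def grads(x):
    return 2.0 * (x - anchors)       # per-objective gradients df_i/dx

def omd_tch(x0=2.0, m=2, T=2000, eta_x=0.05, eta_w=0.2):
    """Sketch of min-max optimization of the Tchebycheff scalarization:
    entropic mirror ascent on simplex weights w, gradient descent on x."""
    x = x0
    w = np.full(m, 1.0 / m)          # uniform initial weights on the simplex
    iterates = []
    for _ in range(T):
        # Mirror ascent on w with entropic regularizer = exponentiated gradient:
        # objectives with larger loss receive larger weight.
        w = w * np.exp(eta_w * losses(x))
        w = w / w.sum()              # project back onto the simplex
        # Descent step on x for the current weighted sum of objectives.
        x = x - eta_x * np.dot(w, grads(x))
        iterates.append(x)
    # Naive online-to-batch conversion: return the average iterate
    # (the paper's adaptive conversion reweights iterates instead).
    return float(np.mean(iterates)), w

x_bar, w = omd_tch()
```

On this toy problem the averaged iterate lands near the min-max point x = 0, with the weights hovering around uniform since both objectives are equally binding there.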