[2509.13550] Complexity Bounds for Smooth Multiobjective Optimization
Summary
This paper investigates the oracle complexity of finding ε-Pareto stationary points in smooth multiobjective optimization, presenting new bounds on how fast first-order methods can converge across the strongly convex, convex, and nonconvex regimes.
Why It Matters
Understanding complexity bounds in multiobjective optimization is crucial for designing algorithms that efficiently find optimal trade-offs in applications such as machine learning and AI. The results provide foundational limits that indicate when existing methods are already optimal and could guide the development of better optimization methods in practice.
Key Takeaways
- Establishes a tight linear rate in the μ-strongly convex case: no span first-order method converges faster than exp(-Θ(T/√κ)), so Θ(√κ log(1/ε)) oracle calls are both necessary and sufficient, matching accelerated upper bounds (see the illustration after this list).
- Proves an Ω(1/T) min-iterate lower bound for oblivious one-step methods and an Ω(1/T²) last-iterate lower bound for oblivious span methods in the convex case.
- Presents tight lower bounds for the nonconvex case, showing that substantially more oracle calls are needed to reach a given accuracy.
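To make the first takeaway concrete, here is a tiny numerical sketch of the Θ(√κ log(1/ε)) oracle-call count; the values of κ and ε are made up for illustration and do not come from the paper.

```python
# Illustrative only: evaluate the Theta(sqrt(kappa) * log(1/eps)) oracle-call
# count from the strongly convex bound for a few hypothetical values of the
# condition number kappa = L/mu and the target accuracy eps.
import math

for kappa in (1e2, 1e4, 1e6):
    for eps in (1e-3, 1e-6):
        calls = math.sqrt(kappa) * math.log(1.0 / eps)
        print(f"kappa={kappa:.0e}, eps={eps:.0e} -> ~{calls:,.0f} oracle calls")
```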
Title: Complexity Bounds for Smooth Multiobjective Optimization
Authors: Phillipe R. Sampaio
arXiv:2509.13550 [math.OC]. Submitted on 16 Sep 2025 (v1); last revised 15 Feb 2026 (this version, v2).
Abstract
We study the oracle complexity of finding $\varepsilon$-Pareto stationary points in smooth multiobjective optimization with $m$ objectives. Progress is measured by the Pareto stationarity gap $\mathcal{G}(x)$, the norm of the best convex combination of objective gradients. Our analysis relies on a non-degenerate lifting that embeds hard single-objective instances into MOO instances with distinct objectives and non-singleton Pareto fronts while preserving lower bounds on $\mathcal{G}$. We establish: (i) in the $\mu$-strongly convex case, any span first-order method has worst-case linear convergence no faster than $\exp(-\Theta(T/\sqrt{\kappa}))$ after $T$ oracle calls, yielding $\Theta(\sqrt{\kappa}\log(1/\varepsilon))$ iterations and matching accelerated upper bounds; (ii) in the convex case, an $\Omega(1/T)$ min-iterate lower bound for oblivious one-step methods and a universal last-iterate lower bound $\Omega(1/T^2)$ for oblivious span methods via polynomial-degree arguments, and we further show this latter bound is loose (for general adaptive methods) by importing geometric lower bounds to obtain...
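The gap $\mathcal{G}(x)$ defined in the abstract can be evaluated at any point as a small quadratic program over the probability simplex. The following Python sketch is not the paper's code: the helper name pareto_stationarity_gap and the quadratic objectives are hypothetical, and SLSQP is just one convenient solver for the subproblem.

```python
# Sketch: compute the Pareto stationarity gap
#   G(x) = min_{lam in simplex} || sum_i lam_i * grad f_i(x) ||,
# i.e. the norm of the best convex combination of the objective gradients.
import numpy as np
from scipy.optimize import minimize

def pareto_stationarity_gap(grads):
    """grads: (m, n) array whose row i is the gradient of f_i at x.
    Returns the distance from the origin to conv{grad f_1(x), ..., grad f_m(x)}."""
    m = grads.shape[0]
    gram = grads @ grads.T  # lam @ gram @ lam == ||sum_i lam_i * grad_i||^2

    res = minimize(
        lambda lam: lam @ gram @ lam,
        np.full(m, 1.0 / m),                      # start at the simplex center
        bounds=[(0.0, 1.0)] * m,                  # lam_i >= 0
        constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
        method="SLSQP",
    )
    return float(np.sqrt(max(res.fun, 0.0)))

# Hypothetical bi-objective example: f_i(x) = 0.5 * ||x - c_i||^2, so that
# grad f_i(x) = x - c_i and the Pareto set is the segment between c_1 and c_2.
x = np.array([0.3, 0.7])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
print(pareto_stationarity_gap(x - centers))  # > 0: x is not Pareto stationary
```

A point is ε-Pareto stationary exactly when this value is at most ε; the same min-norm subproblem underlies multiple-gradient descent type methods.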