[2411.07102] Effectively Leveraging Momentum Terms in Stochastic Line Search Frameworks for Fast Optimization of Finite-Sum Problems
Summary
This paper presents a novel algorithmic framework that integrates momentum terms with stochastic line search methods to optimize finite-sum problems, particularly in large-scale deep learning contexts.
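For reference, the finite-sum setting the paper targets has the standard form (notation here is ours, not taken from the paper):

```latex
\min_{x \in \mathbb{R}^d} \; f(x) \;=\; \frac{1}{n} \sum_{i=1}^{n} f_i(x)
```

where each $f_i$ is the loss on the $i$-th sample (or mini-batch). In large-scale deep learning, $n$ is huge, so evaluating a single $\nabla f_i$ is far cheaper than the full gradient $\nabla f$, which motivates stochastic methods that work with mini-batch estimates.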
Why It Matters
The research addresses the challenge of efficiently optimizing finite-sum problems, which are ubiquitous in machine learning. By combining momentum techniques with stochastic line searches, the proposed method attains provable convergence guarantees and improved empirical performance, making it relevant for practitioners in AI and optimization.
Key Takeaways
- Introduces a framework that combines momentum terms with stochastic line searches.
- Demonstrates state-of-the-art performance in both convex and nonconvex optimization problems.
- Highlights the importance of mini-batch persistency in improving computational efficiency.
Mathematics > Optimization and Control
arXiv:2411.07102 (math)
[Submitted on 11 Nov 2024 (v1), last revised 23 Feb 2026 (this version, v4)]
Title: Effectively Leveraging Momentum Terms in Stochastic Line Search Frameworks for Fast Optimization of Finite-Sum Problems
Authors: Matteo Lapucci, Davide Pucci
Abstract: In this work, we address unconstrained finite-sum optimization problems, with particular focus on instances originating in large scale deep learning scenarios. Our main interest lies in the exploration of the relationship between recent line search approaches for stochastic optimization in the overparametrized regime and momentum directions. First, we point out that combining these two elements with computational benefits is not straightforward. To this aim, we propose a solution based on mini-batch persistency. We then introduce an algorithmic framework that exploits a mix of data persistency, conjugate-gradient type rules for the definition of the momentum parameter and stochastic line searches. The resulting algorithm provably possesses convergence properties under suitable assumptions and is empirically shown to outperform other popular methods from the literature, obtaining state-of-the-art results in both convex and nonconvex large scale training problems.