[2602.13513] Learning Gradient Flow: Using Equation Discovery to Accelerate Engineering Optimization
Summary
This paper explores data-driven equation discovery to enhance optimization processes in engineering, introducing the Learned Gradient Flow (LGF) optimizer for improved convergence.
Why It Matters
The research addresses the cost of repeated objective-function and gradient evaluations in optimization, offering a novel approach that leverages learned gradient flows to accelerate convergence. This has significant implications for engineering and machine learning applications, potentially reducing computational resources and time.
Key Takeaways
- Introduces the Learned Gradient Flow (LGF) optimizer to enhance optimization efficiency.
- Utilizes trajectory data to model continuous-time dynamics in optimization problems.
- Demonstrates effectiveness through applications in engineering mechanics and scientific machine learning.
- Reduces the need for expensive evaluations of objective functions and gradients.
- Captures critical features of optimization trajectories for faster convergence.
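The takeaways above can be made concrete with a minimal sketch, which is our own illustration rather than the paper's code: gradient descent on a quadratic follows the gradient flow dx/dt = -Ax, so we fit those dynamics from a short trajectory by least squares (a linear, degree-1 equation-discovery library) and then integrate the learned ODE as a surrogate, with no further gradient evaluations. The matrix A, step size, and iteration counts are illustrative assumptions.

```python
import numpy as np

# Gradient descent on f(x) = 0.5 x^T A x follows the gradient flow
# dx/dt = -A x. We learn that flow from trajectory data alone.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # assumed SPD Hessian (illustrative)

# Collect a short gradient-descent trajectory with step size h.
h, n_steps = 0.05, 20
xs = [np.array([2.0, -1.5])]
for _ in range(n_steps):
    xs.append(xs[-1] - h * A @ xs[-1])
X = np.array(xs)

# Estimate time derivatives by finite differences: dx/dt ~ (x_{k+1} - x_k) / h.
dX = (X[1:] - X[:-1]) / h

# Equation discovery with a linear library: solve dX ~ X W in least squares.
W, *_ = np.linalg.lstsq(X[:-1], dX, rcond=None)
print("learned dynamics matrix:\n", W.T)   # approximates -A

# Use the learned flow as a surrogate: integrate dx/dt = W^T x with
# forward Euler, requiring no further evaluations of f or its gradient.
x = X[-1].copy()
for _ in range(200):
    x = x + h * (W.T @ x)
print("surrogate minimizer estimate:", x)  # driven toward the optimum at 0
```

The paper's LGF optimizer generalizes this idea to polynomial libraries of variable order and to the flows induced by Newton's method and ADAM; the sketch only shows the simplest linear case.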
Paper Details
Mathematics > Optimization and Control, arXiv:2602.13513 (math)
Submitted on 13 Feb 2026
Title: Learning Gradient Flow: Using Equation Discovery to Accelerate Engineering Optimization
Authors: Grant Norman, Conor Rowan, Kurt Maute, Alireza Doostan
Abstract: In this work, we investigate the use of data-driven equation discovery for dynamical systems to model and forecast continuous-time dynamics of unconstrained optimization problems. To avoid expensive evaluations of the objective function and its gradient, we leverage trajectory data on the optimization variables to learn the continuous-time dynamics associated with gradient descent, Newton's method, and ADAM optimization. The discovered gradient flows are then solved as a surrogate for the original optimization problem. To this end, we introduce the Learned Gradient Flow (LGF) optimizer, which is equipped to build surrogate models of variable polynomial order in full- or reduced-dimensional spaces at user-defined intervals in the optimization process. We demonstrate the efficacy of this approach on several standard problems from engineering mechanics and scientific machine learning, including two inverse problems, structural topology optimization, and two forward solves with different discretizations. Our results suggest that the learned gra...
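The abstract mentions building surrogates in reduced-dimensional spaces. A hedged sketch of one plausible realization (our assumption, not the paper's implementation): project a high-dimensional trajectory onto its leading SVD directions, learn the flow in that low-dimensional coordinate system, integrate there, and lift the result back. All problem sizes and spectra below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 50, 2                       # ambient and reduced dimensions (assumed)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
evals = np.concatenate([[4.0, 2.5], np.full(d - 2, 0.1)])
A = Q @ np.diag(evals) @ Q.T       # SPD Hessian with two dominant modes

# Short gradient-descent trajectory started in the dominant subspace.
h, n_steps = 0.05, 40
xs = [Q[:, :2] @ np.array([3.0, -2.0])]
for _ in range(n_steps):
    xs.append(xs[-1] - h * A @ xs[-1])
X = np.array(xs)

# Reduced basis from the trajectory snapshots (leading left singular vectors).
U, s, _ = np.linalg.svd(X.T, full_matrices=False)
V = U[:, :r]                       # d x r projection basis
Z = X @ V                          # reduced coordinates of the trajectory

# Learn linear dynamics dz/dt ~ z W in the reduced space.
dZ = (Z[1:] - Z[:-1]) / h
W, *_ = np.linalg.lstsq(Z[:-1], dZ, rcond=None)

# Integrate the reduced surrogate flow with forward Euler, then lift back.
z = Z[-1].copy()
for _ in range(300):
    z = z + h * (W.T @ z)
x_surrogate = V @ z
print("||x_surrogate||:", np.linalg.norm(x_surrogate))  # driven toward 0
```

Working in r dimensions instead of d makes both the regression and the ODE integration cheap, which is the motivation the abstract gives for the reduced-dimensional option.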