[2602.13513] Learning Gradient Flow: Using Equation Discovery to Accelerate Engineering Optimization

arXiv - Machine Learning · 4 min read

Summary

This paper applies data-driven equation discovery to engineering optimization, introducing the Learned Gradient Flow (LGF) optimizer, which learns the continuous-time dynamics of an optimizer from its trajectory data and integrates the learned flow as a cheap surrogate for further iterations.

Why It Matters

The research addresses the cost of repeated objective-function and gradient evaluations in optimization problems: by forecasting the optimizer's trajectory with a learned gradient flow, it can accelerate convergence without additional expensive evaluations. This has significant implications for engineering and scientific machine learning applications, potentially reducing computational resources and time.

Key Takeaways

  • Introduces the Learned Gradient Flow (LGF) optimizer to enhance optimization efficiency.
  • Utilizes trajectory data to model continuous-time dynamics in optimization problems.
  • Demonstrates effectiveness through applications in engineering mechanics and scientific machine learning.
  • Reduces the need for expensive evaluations of objective functions and gradients.
  • Captures critical features of optimization trajectories for faster convergence.
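The "continuous-time dynamics" in the takeaways above refers to the standard observation that gradient descent with step size h is the forward-Euler discretization of the gradient flow ODE dx/dt = -∇f(x). A minimal check of this equivalence (the quadratic objective and step size here are illustrative choices, not from the paper):

```python
import numpy as np

# Illustrative objective f(x) = x1^2 + 2*x2^2, gradient [2*x1, 4*x2]
grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])

h = 0.1
x_gd = np.array([1.0, 1.0])     # gradient descent iterate
x_euler = np.array([1.0, 1.0])  # forward Euler on dx/dt = -grad f(x)
for _ in range(50):
    x_gd = x_gd - h * grad(x_gd)
    x_euler = x_euler + h * (-grad(x_euler))  # identical update rule

print(np.allclose(x_gd, x_euler))  # True: GD is Euler on the gradient flow
```

This correspondence is what makes it natural to model an optimizer's iterates as samples from a continuous-time dynamical system and then apply equation-discovery tools to them.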

Mathematics > Optimization and Control
arXiv:2602.13513 (math) [Submitted on 13 Feb 2026]
Title: Learning Gradient Flow: Using Equation Discovery to Accelerate Engineering Optimization
Authors: Grant Norman, Conor Rowan, Kurt Maute, Alireza Doostan

Abstract: In this work, we investigate the use of data-driven equation discovery for dynamical systems to model and forecast continuous-time dynamics of unconstrained optimization problems. To avoid expensive evaluations of the objective function and its gradient, we leverage trajectory data on the optimization variables to learn the continuous-time dynamics associated with gradient descent, Newton's method, and ADAM optimization. The discovered gradient flows are then solved as a surrogate for the original optimization problem. To this end, we introduce the Learned Gradient Flow (LGF) optimizer, which is equipped to build surrogate models of variable polynomial order in full- or reduced-dimensional spaces at user-defined intervals in the optimization process. We demonstrate the efficacy of this approach on several standard problems from engineering mechanics and scientific machine learning, including two inverse problems, structural topology optimization, and two forward solves with different discretizations. Our results suggest that the learned gra...
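The abstract's workflow (record a trajectory, discover polynomial gradient-flow dynamics from it, then integrate the learned flow instead of calling the objective) can be sketched on a toy quadratic. This is a minimal illustration in the spirit of the method, not the authors' LGF implementation; the objective, polynomial library, and step sizes are all assumptions made for the example:

```python
import numpy as np

# Toy objective (illustration only, not from the paper):
# f(x) = 0.5 x^T A x - b^T x, with minimizer x* = A^{-1} b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b

# 1) Expensive phase: record a short gradient-descent trajectory.
h = 0.05
xs = [np.array([4.0, 4.0])]
for _ in range(20):
    xs.append(xs[-1] - h * grad(xs[-1]))
xs = np.array(xs)

# 2) Equation discovery: estimate velocities by finite differences and
#    fit degree-1 polynomial dynamics  dx/dt ~ c + W x  by least squares.
dx_dt = (xs[1:] - xs[:-1]) / h
Theta = np.hstack([np.ones((len(dx_dt), 1)), xs[:-1]])  # library [1, x1, x2]
coeffs, *_ = np.linalg.lstsq(Theta, dx_dt, rcond=None)

# 3) Surrogate phase: integrate the learned flow with forward Euler,
#    making no further objective or gradient evaluations.
x = xs[-1].copy()
for _ in range(2000):
    x = x + h * (np.hstack([1.0, x]) @ coeffs)

x_star = np.linalg.solve(A, b)
print(np.allclose(x, x_star, atol=1e-3))  # True: learned flow reaches x*
```

Because the trajectory here is noise-free and the true flow is linear, the degree-1 library recovers the dynamics exactly; the paper's contribution lies in making this practical for higher-order libraries, reduced-dimensional spaces, and periodic refitting during real optimization runs.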

