[2603.23994] Understanding the Challenges in Iterative Generative Optimization with LLMs
Computer Science > Machine Learning
arXiv:2603.23994 (cs) [Submitted on 25 Mar 2026]
Title: Understanding the Challenges in Iterative Generative Optimization with LLMs
Authors: Allen Nie, Xavier Daull, Zhiyi Kuang, Abhinav Akkiraju, Anish Chaudhuri, Max Piasevoli, Ryan Rong, YuCheng Yuan, Prerit Choudhary, Shannon Xiao, Rasool Fakoor, Adith Swaminathan, Ching-An Cheng
Abstract: Generative optimization uses large language models (LLMs) to iteratively improve artifacts (such as code, workflows, or prompts) using execution feedback. It is a promising approach to building self-improving agents, yet in practice it remains brittle: despite active research, only 9% of surveyed agents used any automated optimization. We argue that this brittleness arises because, to set up a learning loop, an engineer must make "hidden" design choices: what can the optimizer edit, and what is the "right" learning evidence to provide at each update? We investigate three factors that affect most applications: the starting artifact, the credit horizon for execution traces, and the batching of trials and errors into learning evidence. Through case studies in MLAgentBench, Atari, and BigBench Extra Hard, we find that these design decisions can determine whether generative optimization succeeds, yet they are rarely made explicit in prior work. Diff...
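The loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: `run_trial` and `propose_update` are hypothetical placeholders (a toy numeric objective stands in for executing an artifact, and a random perturbation stands in for the LLM optimizer). The three design choices the abstract names appear as explicit knobs: the starting artifact, the trace truncation (a stand-in for the credit horizon), and the batch size for learning evidence.

```python
import random

def run_trial(artifact):
    # Placeholder evaluator: in practice this would execute the artifact
    # (code, workflow, or prompt) and return a score plus an execution trace.
    score = -abs(artifact - 42)  # toy objective: get close to 42
    trace = f"artifact={artifact}, score={score}"
    return score, trace

def propose_update(artifact, evidence):
    # Placeholder for the LLM optimizer: given the current artifact and the
    # batched learning evidence, propose an edited artifact.
    return artifact + random.choice([-3, -1, 1, 3])

def optimize(start_artifact, iters=50, batch_size=4, horizon=80, seed=0):
    """Iterative generative optimization loop (illustrative sketch).

    start_artifact -- the starting artifact
    horizon        -- how much of each trace to keep (credit horizon)
    batch_size     -- how many trials are batched into learning evidence
    """
    random.seed(seed)
    artifact = start_artifact
    best, best_score = artifact, run_trial(artifact)[0]
    for _ in range(iters):
        evidence = []
        for _ in range(batch_size):
            _, trace = run_trial(artifact)
            evidence.append(trace[-horizon:])  # truncate trace to the horizon
        artifact = propose_update(artifact, evidence)
        score, _ = run_trial(artifact)
        if score > best_score:
            best, best_score = artifact, score
    return best, best_score
```

Even in this toy setting, changing the starting artifact, horizon, or batch size changes what evidence reaches the optimizer, which is the sensitivity the paper investigates.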