[2603.01692] Reasoning as Gradient: Scaling MLE Agents Beyond Tree Search
Computer Science > Machine Learning

arXiv:2603.01692 (cs)

[Submitted on 2 Mar 2026]

Title: Reasoning as Gradient: Scaling MLE Agents Beyond Tree Search

Authors: Yifei Zhang, Xu Yang, Xiao Yang, Bowen Xian, Qizheng Li, Shikai Fang, Jingyuan Li, Jian Wang, Mingrui Xu, Weiqing Liu, Jiang Bian

Abstract: LLM-based agents for machine learning engineering (MLE) predominantly rely on tree search, a form of gradient-free optimization that uses scalar validation scores to rank candidates. As LLM reasoning capabilities improve, exhaustive enumeration becomes increasingly inefficient compared to directed updates, analogous to how accurate gradients enable efficient descent over random search. We introduce Gome, an MLE agent that operationalizes gradient-based optimization. Gome maps structured diagnostic reasoning to gradient computation, success memory to momentum, and multi-trace execution to distributed optimization. Under a closed-world protocol that isolates architectural effects from external knowledge, Gome achieves a state-of-the-art 35.1% any-medal rate on MLE-Bench with a restricted 12-hour budget on a single V100 GPU. Scaling experiments across 10 models reveal a critical crossover: with weaker models, tree search retains advantages by compensating for unreliable reasoning through exhaustive ex...
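The abstract's central analogy is that directed updates (gradient descent) beat undirected enumeration (random search) on the same budget. A minimal toy sketch of that contrast on a 1-D quadratic is below; it illustrates the analogy only, not the paper's agent, and all names (`f`, `gradient_descent`, `random_search`) are illustrative.

```python
import random

def f(x):
    # Toy 1-D objective with its minimum at x = 3.
    return (x - 3.0) ** 2

def grad_f(x):
    # Analytic gradient of the toy objective.
    return 2.0 * (x - 3.0)

def gradient_descent(x0=0.0, lr=0.1, steps=50):
    # Directed updates: each step moves against the gradient.
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def random_search(x0=0.0, steps=50, scale=1.0, seed=0):
    # Undirected search: perturb the best point seen so far and
    # keep the candidate only if it scores better (hill climbing).
    rng = random.Random(seed)
    best_x, best_val = x0, f(x0)
    for _ in range(steps):
        cand = best_x + rng.gauss(0.0, scale)
        if f(cand) < best_val:
            best_x, best_val = cand, f(cand)
    return best_x

# With the same step budget, the directed method converges to the
# optimum at machine-precision scale, while the undirected method
# only narrows in stochastically.
gd = gradient_descent()
rs = random_search()
```

Under this framing, the paper's claim is that as reasoning becomes reliable enough to act like an accurate "gradient", spending the budget on directed updates dominates spending it on enumeration.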