[2505.20881] Generalizable Heuristic Generation Through LLMs with Meta-Optimization
Computer Science > Machine Learning

arXiv:2505.20881 (cs)

[Submitted on 27 May 2025 (v1), last revised 24 Mar 2026 (this version, v2)]

Title: Generalizable Heuristic Generation Through LLMs with Meta-Optimization

Authors: Yiding Shi, Jianan Zhou, Wen Song, Jieyi Bi, Yaoxin Wu, Zhiguang Cao, Jie Zhang

Abstract: Heuristic design with large language models (LLMs) has emerged as a promising approach for tackling combinatorial optimization problems (COPs). However, existing approaches often rely on manually predefined evolutionary computation (EC) heuristic-optimizers and single-task training schemes, which may constrain the exploration of diverse heuristic algorithms and hinder the generalization of the resulting heuristics. To address these issues, we propose Meta-Optimization of Heuristics (MoH), a novel framework that operates at the optimizer level, discovering effective heuristic-optimizers through the principle of meta-learning. Specifically, MoH leverages LLMs to iteratively refine a meta-optimizer that autonomously constructs diverse heuristic-optimizers through (self-)invocation, thereby eliminating the reliance on a predefined EC heuristic-optimizer. These constructed heuristic-optimizers subsequently evolve heuristics for downstream tasks, enabling broader heuristic exploration. Moreover, MoH employs a multi-task training scheme...
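To make the bilevel structure in the abstract concrete, below is a minimal, self-contained Python sketch, not the authors' implementation. In MoH the LLM writes and refines optimizer programs; here, as a stand-in assumption, a "heuristic" is a weight vector for a greedy knapsack scoring rule, a "heuristic-optimizer" is a hill-climbing stub of a given step size, and the meta level keeps a pool of such optimizers and favors whichever improves the heuristic across several instances (the multi-task signal). All names (`pack`, `fitness`, `make_optimizer`, `meta_optimize`) are hypothetical.

```python
# Hedged sketch of a meta-optimization loop over heuristic-optimizers.
# Not the MoH code: LLM-generated programs are replaced by simple stubs.
import random
from typing import Callable, List, Tuple

Instance = Tuple[List[float], List[float], float]   # (values, weights, capacity)
Heuristic = List[float]                             # scoring weights [a, b]

def pack(h: Heuristic, inst: Instance) -> float:
    """Greedy knapsack guided by the heuristic's item-scoring rule."""
    values, weights, cap = inst
    score = lambda i: h[0] * values[i] + h[1] * values[i] / weights[i]
    total = 0.0
    for i in sorted(range(len(values)), key=score, reverse=True):
        if weights[i] <= cap:
            cap -= weights[i]
            total += values[i]
    return total

def fitness(h: Heuristic, tasks: List[Instance]) -> float:
    """Multi-task objective: average packed value across instances (higher is better)."""
    return sum(pack(h, inst) for inst in tasks) / len(tasks)

def make_optimizer(step: float) -> Callable[[Heuristic, List[Instance]], Heuristic]:
    """A 'heuristic-optimizer': one hill-climbing move of a given step size.
    In MoH such optimizers are constructed by the LLM via (self-)invocation;
    here they are fixed parameterized stubs so the sketch runs."""
    def optimize(h: Heuristic, tasks: List[Instance]) -> Heuristic:
        cand = [w + random.uniform(-step, step) for w in h]
        return cand if fitness(cand, tasks) > fitness(h, tasks) else h
    return optimize

def meta_optimize(tasks: List[Instance], rounds: int = 30) -> Heuristic:
    """Outer (meta) loop: rather than fixing one EC optimizer up front,
    probe a pool of candidate optimizers and apply whichever helps most."""
    heuristic: Heuristic = [1.0, 1.0]                # seed heuristic
    pool = [make_optimizer(s) for s in (0.05, 0.2, 0.8)]
    for _ in range(rounds):
        trials = [(fitness(opt(heuristic, tasks), tasks), opt) for opt in pool]
        _, best_opt = max(trials, key=lambda t: t[0])
        heuristic = best_opt(heuristic, tasks)       # apply the winning optimizer
    return heuristic

if __name__ == "__main__":
    random.seed(0)
    tasks = [([random.uniform(1, 10) for _ in range(12)],
              [random.uniform(1, 10) for _ in range(12)],
              20.0) for _ in range(4)]
    best = meta_optimize(tasks)
    print("weights:", best, "avg value:", round(fitness(best, tasks), 2))
```

Training the heuristic against several instances at once mirrors the abstract's multi-task scheme: the fitness signal rewards rules that generalize across instances rather than overfitting to one.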