[2505.18602] LLM-Meta-SR: In-Context Learning for Evolving Selection Operators in Symbolic Regression
Computer Science > Neural and Evolutionary Computing

arXiv:2505.18602 (cs)

[Submitted on 24 May 2025 (v1), last revised 31 Mar 2026 (this version, v3)]

Title: LLM-Meta-SR: In-Context Learning for Evolving Selection Operators in Symbolic Regression

Authors: Hengzhe Zhang, Qi Chen, Bing Xue, Wolfgang Banzhaf, Mengjie Zhang

Abstract: Large language models (LLMs) have revolutionized algorithm development, yet their application in symbolic regression, where algorithms automatically discover symbolic expressions from data, remains limited. In this paper, we propose a meta-learning framework that enables LLMs to automatically design selection operators for evolutionary symbolic regression algorithms. We first identify two key limitations in existing LLM-based algorithm evolution techniques: lack of semantic guidance and code bloat. The absence of semantic awareness can lead to ineffective exchange of useful code components, while bloat results in unnecessarily complex components; both can hinder evolutionary learning progress or reduce the interpretability of the designed algorithm. To address these issues, we enhance the LLM-based evolution framework for meta-symbolic regression with two key innovations: a complementary, semantics-aware selection operator and bloat control. Additionally, we embed d...
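To make the idea of a complementary, semantics-aware selection operator concrete, the sketch below illustrates one plausible form such an operator could take: the first parent is chosen by a standard tournament on total error, and the second parent is the individual that performs best on the training cases where the first parent is weakest. This is a hypothetical illustration of the general concept, not the operator evolved or proposed in the paper; the function name and parameters are assumptions.

```python
import random

def complementary_selection(population, errors, k=3):
    """Select a semantically complementary parent pair (illustrative sketch).

    population: list of candidate programs
    errors: per-case error vectors; errors[i][j] is the error of
            individual i on training case j (its semantics)
    k: tournament size for choosing the first parent
    """
    n = len(population)
    # First parent: tournament selection on total training error.
    contenders = random.sample(range(n), min(k, n))
    first = min(contenders, key=lambda i: sum(errors[i]))
    # Identify the cases where the first parent performs worst
    # (here, the top quarter of its error vector, at least one case).
    n_cases = len(errors[first])
    weak_cases = sorted(range(n_cases),
                        key=lambda j: errors[first][j],
                        reverse=True)[:max(1, n_cases // 4)]
    # Second parent: the individual with the lowest error on exactly
    # those weak cases, i.e. one whose semantics complement the first's.
    second = min((i for i in range(n) if i != first),
                 key=lambda i: sum(errors[i][j] for j in weak_cases))
    return population[first], population[second]
```

Pairing parents by complementary error profiles, rather than by fitness alone, is one way to make crossover more likely to exchange useful code components, which is the failure mode the abstract attributes to semantics-unaware selection.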