[2602.01082] EvoOpt-LLM: Evolving industrial optimization models with large language models
Computer Science > Artificial Intelligence
arXiv:2602.01082 (cs)
[Submitted on 1 Feb 2026 (v1), last revised 23 Mar 2026 (this version, v2)]

Title: EvoOpt-LLM: Evolving industrial optimization models with large language models
Authors: Yiliu He, Tianle Li, Binghao Ji, Zhiyuan Liu, Di Huang

Abstract: Optimization modeling via mixed-integer linear programming (MILP) is fundamental to industrial planning and scheduling, yet translating natural-language requirements into solver-executable models, and maintaining those models under evolving business rules, remains highly expertise-intensive. While large language models (LLMs) offer promising avenues for automation, existing methods often suffer from low data efficiency, limited solver-level validity, and poor scalability to industrial-scale problems. To address these challenges, we present EvoOpt-LLM, a unified LLM-based framework supporting the full lifecycle of industrial optimization modeling, including automated model construction, dynamic business-constraint injection, and end-to-end variable pruning. Built on a 7B-parameter LLM adapted via parameter-efficient LoRA fine-tuning, EvoOpt-LLM achieves a generation rate of 91% and an executability rate of 65.9% using only 3,000 training samples, with the critical performance gains emerging under 1,500 samples. The constraint injection module ...
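To make "solver-executable model" concrete, here is a minimal sketch of the kind of MILP that a natural-language requirement ("run the most profitable subset of jobs within the machine-hour budget") translates into. The job data and the brute-force enumeration are illustrative assumptions, not from the paper; a real pipeline would emit the model for a MILP solver rather than enumerate.

```python
from itertools import product

# Toy MILP instance (hypothetical data, for illustration only):
# binary variable x_j = 1 if job j is scheduled, maximizing total
# profit subject to a single machine-hours capacity constraint.
profit = [10, 6, 4]   # objective coefficients per job
hours = [5, 4, 3]     # machine-hours consumed per job
capacity = 8          # machine-hours available

best_value, best_plan = -1, None
for x in product([0, 1], repeat=len(profit)):   # all binary assignments
    used = sum(h * xi for h, xi in zip(hours, x))
    if used <= capacity:                        # feasibility check
        value = sum(p * xi for p, xi in zip(profit, x))
        if value > best_value:
            best_value, best_plan = value, x

print(best_plan, best_value)  # → (1, 0, 1) 14
```

Injecting a new business rule (say, "job 1 and job 2 cannot run together") corresponds to adding one more linear constraint, x_1 + x_2 <= 1, which is exactly the kind of incremental model edit the paper's constraint injection module automates.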