[2603.25099] Large Language Models as Optimization Controllers: Adaptive Continuation for SIMP Topology Optimization
Computer Science > Computational Engineering, Finance, and Science
arXiv:2603.25099 (cs)
[Submitted on 26 Mar 2026]

Title: Large Language Models as Optimization Controllers: Adaptive Continuation for SIMP Topology Optimization
Authors: Shaoliang Yang, Jun Wang, Yunsheng Wang

Abstract: We present a framework in which a large language model (LLM) acts as an online adaptive controller for SIMP topology optimization, replacing conventional fixed-schedule continuation with real-time, state-conditioned parameter decisions. At every $k$-th iteration, the LLM receives a structured observation (current compliance, grayness index, stagnation counter, checkerboard measure, volume fraction, and budget consumption) and outputs numerical values for the penalization exponent $p$, projection sharpness $\beta$, filter radius $r_{\min}$, and move limit $\delta$ via a Direct Numeric Control interface. A hard grayness gate prevents premature binarization, and a meta-optimization loop uses a second LLM pass to tune the agent's call frequency and gate threshold across runs. We benchmark the agent against four baselines (fixed no-continuation, standard three-field continuation, an expert heuristic, and a schedule-only ablation) on three 2-D problems (cantilever, MBB beam, L-bracket) at $120\!\...
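The abstract does not give the controller interface in code form. As a minimal sketch, assuming a hypothetical `key=value` text format for the Direct Numeric Control reply and an illustrative gate threshold of 0.25 (the paper tunes this threshold via its meta-optimization loop), the observation, parameter decision, and hard grayness gate described above might be wired together like this:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """State summary sent to the LLM at every k-th iteration."""
    compliance: float       # current structural compliance
    grayness: float         # grayness index in [0, 1]; 0 = fully binary
    stagnation: int         # iterations without meaningful improvement
    checkerboard: float     # checkerboard-pattern measure
    volume_fraction: float  # current material volume fraction
    budget_used: float      # fraction of the iteration budget consumed

@dataclass
class Controls:
    """Continuation parameters returned by the LLM."""
    p: float      # SIMP penalization exponent
    beta: float   # Heaviside projection sharpness
    r_min: float  # density filter radius
    delta: float  # move limit

def parse_dnc(reply: str) -> Controls:
    """Parse a Direct Numeric Control reply of the (assumed) form
    'p=3.5 beta=8.0 r_min=1.4 delta=0.2' into typed controls."""
    kv = dict(token.split("=") for token in reply.split())
    return Controls(p=float(kv["p"]), beta=float(kv["beta"]),
                    r_min=float(kv["r_min"]), delta=float(kv["delta"]))

def apply_grayness_gate(prev: Controls, proposed: Controls,
                        obs: Observation, gate: float = 0.25) -> Controls:
    """Hard gate: while the design is still too gray, refuse any increase
    of p or beta so binarization cannot outrun convergence."""
    if obs.grayness > gate:
        return Controls(p=min(proposed.p, prev.p),
                        beta=min(proposed.beta, prev.beta),
                        r_min=proposed.r_min, delta=proposed.delta)
    return proposed
```

A usage example under the same assumptions: with a gray design (grayness 0.4 > gate 0.25), a proposed jump from `p=3, beta=4` to `p=4, beta=8` is clamped back to the previous values, while `r_min` and `delta` pass through unchanged.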