[2603.19268] Full-Stack Domain Enhancement for Combustion LLMs: Construction and Optimization
Computer Science > Computation and Language
arXiv:2603.19268 (cs) [Submitted on 27 Feb 2026]

Title: Full-Stack Domain Enhancement for Combustion LLMs: Construction and Optimization
Authors: Quanjia Xiao, Weimin Ouyang, Zonglin Yang, Tianhao Wu, Qingguo Zhou, Runze Mao, Zhi X. Chen

Abstract: Large language models (LLMs) show significant application potential for task adaptation and capability enhancement in professional fields. Nevertheless, for complex physical systems such as combustion science, general-purpose LLMs often produce severe hallucinations owing to insufficient domain knowledge and an inability to adhere to physical conservation laws. To address this, we propose the first full-stack domain-enhanced LLM workflow tailored to combustion science, which integrates automated domain corpus construction, incremental pre-training, instruction fine-tuning, and reinforcement learning with verifiable rewards. This workflow ensures that the model truly internalizes physical laws rather than merely learning textual statistical patterns. We also release FlameBench, a standardized evaluation benchmark designed for complex reasoning tasks in combustion science. Experimental results demonstrate that the model developed in this work significantly outp...