[2604.01455] Infeasibility Aware Large Language Models for Combinatorial Optimization
Computer Science > Artificial Intelligence
arXiv:2604.01455 (cs) [Submitted on 1 Apr 2026]

Title: Infeasibility Aware Large Language Models for Combinatorial Optimization
Authors: Yakun Wang, Min Chen, Zeguan Wu, Junyu Liu, Sitao Zhang, Zhenwen Shao

Abstract: Large language models (LLMs) are increasingly explored for NP-hard combinatorial optimization problems, but most existing methods emphasize solution generation on feasible instances and do not explicitly address infeasibility detection. We propose an infeasibility-aware framework that combines certifiable dataset construction, supervised fine-tuning, and LLM-assisted downstream search. For the minor-embedding problem, we introduce a new mathematical programming formulation together with provable zero-phase infeasibility screening, which enables scalable construction of training instances labeled either as feasible with structured certificates or as certifiably infeasible. Using training data generated through this exact optimization pipeline, we show that an 8B-parameter LLM can be fine-tuned to jointly perform solution generation and infeasibility detection. We further use LLM outputs as warm starts for downstream local search, providing a practical way to accelerate optimization even when the outputs are imperfect. Experiments show that our fine-tune...
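The abstract does not spell out the paper's zero-phase screening conditions, but the general idea of cheap, certifiable infeasibility checks for minor embedding can be illustrated with generic necessary conditions. The sketch below uses only simple counting bounds (if G is a minor of H, then G cannot have more vertices or more edges than H); the function name `quick_infeasible` and the adjacency-dict representation are assumptions for illustration, not the paper's actual procedure.

```python
# Hedged sketch: generic necessary conditions for minor containment.
# These counting bounds are standard graph-theory facts, not the
# paper's zero-phase screening, whose details are not in the abstract.

def edge_count(adj):
    """Number of edges in an undirected graph given as an adjacency dict."""
    return sum(len(nbrs) for nbrs in adj.values()) // 2

def quick_infeasible(guest, host):
    """Return a reason string if `guest` certifiably cannot be a minor
    of `host`, else None (inconclusive).  Both graphs are dicts mapping
    a vertex to a list of its neighbors."""
    # Deleting and contracting edges never increases the vertex count,
    # so a minor of `host` has at most len(host) vertices.
    if len(guest) > len(host):
        return "guest has more vertices than host"
    # Likewise, a minor of `host` has at most edge_count(host) edges.
    if edge_count(guest) > edge_count(host):
        return "guest has more edges than host"
    return None

# A triangle cannot be a minor of a single edge: too many vertices/edges.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
edge = {0: [1], 1: [0]}
print(quick_infeasible(triangle, edge))  # prints a reason string
print(quick_infeasible(edge, triangle))  # prints None (inconclusive)
```

When such a check fires, the instance can be labeled certifiably infeasible without running any solver, which is the kind of scalable labeling the dataset-construction step relies on.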