[2603.23004] Can Large Language Models Reason and Optimize Under Constraints?
Computer Science > Artificial Intelligence
arXiv:2603.23004 (cs) [Submitted on 24 Mar 2026]

Title: Can Large Language Models Reason and Optimize Under Constraints?
Authors: Fabien Bernier, Salah Ghamizi, Pantelis Dogoulis, Maxime Cordy

Abstract: Large Language Models (LLMs) have demonstrated strong capabilities across diverse natural language tasks, yet their ability to solve abstraction and optimization problems with constraints remains scarcely explored. In this paper, we investigate whether LLMs can reason and optimize under the physical and operational constraints of the Optimal Power Flow (OPF) problem. We introduce a challenging evaluation setup that requires a set of fundamental skills: reasoning, structured input handling, arithmetic, and constrained optimization. Our evaluation reveals that state-of-the-art LLMs fail on most of the tasks, and that reasoning LLMs still fail in the most complex settings. Our findings highlight critical gaps in LLMs' ability to handle structured reasoning under constraints, and this work provides a rigorous testing environment for developing more capable LLM assistants that can tackle real-world power grid optimization problems.

Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2603.23004 [cs.AI] (or arXiv:2603.23004v1 [cs.AI] for this version) http...
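To give a sense of the kind of constrained optimization involved, the sketch below solves a toy DC Optimal Power Flow instance as a linear program. This is a minimal illustrative example, not the paper's benchmark setup: a two-bus system with hypothetical generator costs, capacity limits, and a line flow limit, dispatched with `scipy.optimize.linprog`. An LLM asked to solve such a task must parse the structured data, respect every constraint, and do the arithmetic correctly.

```python
# Toy DC-OPF as a linear program (illustrative only; parameters are made up).
# Two buses: a 100 MW load sits at bus 2; generator g1 at bus 1 must ship its
# output over a line limited to 60 MW, generator g2 sits at bus 2.
from scipy.optimize import linprog

cost = [10.0, 20.0]           # $/MWh for g1 (cheap) and g2 (expensive)
A_eq = [[1.0, 1.0]]           # power balance: g1 + g2 = load
b_eq = [100.0]                # total load in MW
A_ub = [[1.0, 0.0]]           # line flow from bus 1 equals g1's output
b_ub = [60.0]                 # line thermal limit in MW
bounds = [(0.0, 80.0), (0.0, 80.0)]  # generator capacity limits

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

# The cheap generator is dispatched up to the line limit (60 MW),
# and the expensive one covers the remainder (40 MW).
print(res.x, res.fun)  # → [60. 40.] 1400.0
```

The binding line constraint is what makes this non-trivial: a greedy "use the cheapest generator" answer violates the flow limit, so the solver (or the LLM) must trade off cost against feasibility.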