[2603.19514] Learning to Disprove: Formal Counterexample Generation with Large Language Models
Computer Science > Artificial Intelligence

arXiv:2603.19514 (cs)

[Submitted on 19 Mar 2026]

Title: Learning to Disprove: Formal Counterexample Generation with Large Language Models

Authors: Zenan Li, Zhaoyu Li, Kaiyu Yang, Xiaoxing Ma, Zhendong Su

Abstract: Mathematical reasoning demands two critical, complementary skills: constructing rigorous proofs for true statements and discovering counterexamples that disprove false ones. However, current AI efforts in mathematics focus almost exclusively on proof construction, often neglecting the equally important task of finding counterexamples. In this paper, we address this gap by fine-tuning large language models (LLMs) to reason about and generate counterexamples. We formalize this task as formal counterexample generation, which requires LLMs not only to propose candidate counterexamples but also to produce formal proofs that can be automatically verified in the Lean 4 theorem prover. To enable effective learning, we introduce a symbolic mutation strategy that synthesizes diverse training data by systematically extracting theorems and discarding selected hypotheses, thereby producing diverse counterexample instances. Together with curated datasets, this strategy enables a multi-reward expert iteration framework that substantially enhances both the effectiven...
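To make the task concrete, here is a toy sketch (not from the paper) of what formal counterexample generation and the hypothesis-dropping mutation might look like in Lean 4. A true theorem guarded by a hypothesis is mutated by discarding that hypothesis, yielding a false statement; the model's job is then to pick a witness and produce a machine-checkable disproof:

```lean
-- Original (true) theorem: subtraction decreases any nonzero natural number.
--   ∀ n : Nat, n ≠ 0 → n - 1 < n
-- Symbolic mutation: discard the hypothesis n ≠ 0. The resulting statement
-- is false, because Nat subtraction truncates at zero (0 - 1 = 0).

-- A formal disproof, using the witness n = 0 as the counterexample:
theorem counterexample : ¬ ∀ n : Nat, n - 1 < n := by
  intro h
  -- h 0 : 0 - 1 < 0, which reduces to 0 < 0, contradicting irreflexivity.
  exact Nat.lt_irrefl 0 (h 0)

-- A second toy instance: the false conjecture that squaring never fixes
-- its input, refuted by the witness n = 1 (since 1 * 1 = 1).
theorem disproof : ¬ ∀ n : Nat, n * n ≠ n := by
  intro h
  exact h 1 rfl
```

Both disproofs check in the Lean 4 kernel with no external libraries, which is the kind of automatic verification the abstract describes.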