[2512.10534] Achieving Olympia-Level Geometry Large Language Model Agent via Complexity Boosting Reinforcement Learning
Computer Science > Artificial Intelligence

arXiv:2512.10534 (cs)

[Submitted on 11 Dec 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Title: Achieving Olympia-Level Geometry Large Language Model Agent via Complexity Boosting Reinforcement Learning

Authors: Haiteng Zhao, Junhao Shen, Yiming Zhang, Songyang Gao, Kuikun Liu, Tianyou Ma, Fan Zheng, Dahua Lin, Wenwei Zhang, Kai Chen

Abstract: Large language model (LLM) agents exhibit strong mathematical problem-solving abilities and can even solve International Mathematical Olympiad (IMO)-level problems with the assistance of formal proof systems. However, due to weak heuristics for auxiliary constructions, AI for geometry problem solving remains dominated by expert models such as AlphaGeometry 2, which rely heavily on large-scale data synthesis and search for both training and evaluation. In this work, we make the first attempt to build a medalist-level LLM agent for geometry and present InternGeometry. InternGeometry overcomes the heuristic limitations in geometry by iteratively proposing propositions and auxiliary constructions, verifying them with a symbolic engine, and reflecting on the engine's feedback to guide subsequent proposals. A dynamic memory mechanism enables InternGeometry to conduct more than two hundred in...
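The propose-verify-reflect loop described in the abstract can be sketched in miniature. This is a hypothetical illustration, not the paper's actual system: the names (`Memory`, `symbolic_verify`, `propose`, `solve`) and the toy verification rule are assumptions, and a real symbolic geometry engine would replace the stub here.

```python
# Hedged sketch of an iterative propose/verify/reflect agent loop.
# All identifiers are hypothetical; the real InternGeometry pipeline
# is not described in this much detail in the abstract.

from dataclasses import dataclass, field

@dataclass
class Memory:
    """Dynamic memory of verified propositions and engine feedback."""
    verified: list = field(default_factory=list)
    feedback: list = field(default_factory=list)

def symbolic_verify(proposition: str) -> tuple[bool, str]:
    # Stand-in for a symbolic engine: a toy rule that only accepts
    # propositions mentioning a "midpoint" construction.
    ok = "midpoint" in proposition
    return ok, "accepted" if ok else "no derivation found"

def propose(memory: Memory, step: int) -> str:
    # Stand-in for the LLM proposing a proposition or auxiliary
    # construction; a real agent would condition on memory.feedback.
    candidates = [
        "draw circumcircle of ABC",
        "take midpoint of AB",
        "midpoint M lies on the perpendicular bisector",
    ]
    return candidates[step % len(candidates)]

def solve(max_steps: int = 6) -> Memory:
    memory = Memory()
    for step in range(max_steps):
        prop = propose(memory, step)
        ok, msg = symbolic_verify(prop)
        if ok:
            memory.verified.append(prop)  # keep only verified facts
        memory.feedback.append(msg)       # reflect on engine feedback
    return memory

mem = solve()
print(len(mem.verified), len(mem.feedback))  # → 4 6
```

Over six steps the three candidates cycle twice; the two "midpoint" proposals pass verification on each pass, so four propositions are retained while all six feedback messages accumulate in memory, mirroring how a dynamic memory lets the agent run many interactions without discarding engine signals.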