[2512.20745] AgentMath: Empowering Mathematical Reasoning for Large Language Models via Tool-Augmented Agent
Computer Science > Artificial Intelligence
arXiv:2512.20745 (cs)
[Submitted on 23 Dec 2025 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: AgentMath: Empowering Mathematical Reasoning for Large Language Models via Tool-Augmented Agent
Authors: Haipeng Luo, Huawen Feng, Qingfeng Sun, Can Xu, Kai Zheng, Yufei Wang, Tao Yang, Han Hu, Yansong Tang

Abstract: Large Reasoning Models (LRMs) such as o3 and DeepSeek-R1 have achieved remarkable progress on reasoning tasks through long chain-of-thought (CoT) reasoning. However, they remain computationally inefficient and struggle with accuracy when solving problems that require complex mathematical operations. In this work, we present AgentMath, an agent framework that seamlessly integrates language models' reasoning capabilities with code interpreters' computational precision to efficiently tackle complex mathematical problems. Our approach introduces three key innovations: (1) an automated method that converts natural-language chain-of-thought into structured tool-augmented trajectories, generating high-quality supervised fine-tuning (SFT) data to alleviate data scarcity; (2) a novel agentic reinforcement learning (RL) paradigm that dynamically interleaves natural-language generation with real-time code execution, enabling models to autonomously learn optimal tool-use strategies through mult...
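The interleaving of natural-language generation with real-time code execution described in the abstract can be sketched as a simple agent loop: the model emits text, any fenced code block is executed by an interpreter, and the output is appended to the transcript before the model continues. This is a minimal illustration, not the paper's implementation; the `agent_solve`, `run_code`, and `toy_model` names, the `[interpreter output]` delimiter, and the use of a toy stand-in model are all assumptions for the sake of a runnable example.

```python
import re
import io
import contextlib


def run_code(snippet: str) -> str:
    """Execute a Python snippet and capture its stdout.

    Toy sandbox for illustration only; a real system would run code in an
    isolated interpreter process with resource limits.
    """
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(snippet, {})
    return buf.getvalue().strip()


def agent_solve(model_step, problem: str, max_turns: int = 4) -> str:
    """Interleave model reasoning with real-time code execution.

    `model_step` is a stand-in for the LLM: it maps the transcript so far
    to the next chunk of text, which may contain a ```python fenced block
    (a tool call). When no tool call appears, the model is done.
    """
    transcript = problem
    for _ in range(max_turns):
        chunk = model_step(transcript)
        transcript += "\n" + chunk
        match = re.search(r"```python\n(.*?)```", chunk, re.DOTALL)
        if match is None:  # no tool call -> final answer reached
            return transcript
        result = run_code(match.group(1))
        transcript += f"\n[interpreter output]\n{result}"
    return transcript


# Toy "model": first turn issues a code call, second turn reads the result.
def toy_model(transcript: str) -> str:
    if "[interpreter output]" not in transcript:
        return "Compute 17 * 24 exactly:\n```python\nprint(17 * 24)\n```\n"
    tail = transcript.rsplit("[interpreter output]\n", 1)[1]
    answer = tail.split("\n", 1)[0]
    return f"The answer is {answer}."


print(agent_solve(toy_model, "What is 17 * 24?"))
```

In the RL setting the paper describes, `model_step` would be the policy being trained, and the transcript (with interleaved interpreter outputs) would form the trajectory that is scored and used to update the policy.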