[2512.18857] CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning
Computer Science > Artificial Intelligence

arXiv:2512.18857 (cs)

[Submitted on 21 Dec 2025 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning

Authors: Zijun Gao, Zhikun Xu, Xiao Ye, Ben Zhou

Abstract: Large language models (LLMs) often solve challenging math exercises yet fail to apply the underlying concept correctly when a problem requires genuine understanding. Popular Reinforcement Learning with Verifiable Rewards (RLVR) pipelines reinforce final answers but provide little fine-grained conceptual signal, so models improve at pattern reuse rather than conceptual application. We introduce CORE (Concept-Oriented REinforcement), an RL training framework that turns explicit concepts into a controllable supervision signal. Starting from a high-quality, low-contamination textbook resource that links verifiable exercises to concise concept descriptions, we run a sanity probe showing that LLMs can restate definitions yet fail concept-linked quizzes, quantifying the conceptual reasoning gap. CORE then (i) synthesizes concept-aligned quizzes, (ii) injects brief concept snippets during rollouts to elicit concept-primed trajectories, and (iii) reinforces conceptual reasoning via trajectory rep...
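The abstract describes steps (ii) and (iii) only at a high level. As a minimal sketch of what a concept-primed rollout with a binary verifiable reward could look like, assuming a simple exact-match answer check: all names here (ConceptQuiz, sample_completion, concept_primed_rollout) are hypothetical and not from the paper, and the sampling call is a stand-in for a real policy model.

```python
# Hypothetical sketch of a concept-primed RLVR rollout step, based only on the
# abstract. Names and structure are illustrative, not the authors' code.
from dataclasses import dataclass
import random


@dataclass
class ConceptQuiz:
    concept_snippet: str  # concise concept description linked to the exercise
    question: str         # verifiable, concept-aligned quiz question
    gold_answer: str      # reference answer used for the verifiable reward


def sample_completion(prompt: str) -> str:
    """Stand-in for an LLM rollout; replace with a real policy call."""
    return random.choice(["42", "7"])  # placeholder outputs


def concept_primed_rollout(quiz: ConceptQuiz) -> tuple[str, float]:
    # (ii) Inject the concept snippet into the prompt so the sampled
    # trajectory is primed with the relevant concept.
    prompt = f"Concept: {quiz.concept_snippet}\n\nProblem: {quiz.question}"
    trajectory = sample_completion(prompt)
    # (iii) Verifiable reward: 1.0 if the final answer matches, else 0.0.
    reward = float(trajectory.strip() == quiz.gold_answer.strip())
    return trajectory, reward


if __name__ == "__main__":
    quiz = ConceptQuiz(
        concept_snippet="The derivative of x^n is n*x^(n-1).",
        question="What is the derivative of x^2 at x = 21?",
        gold_answer="42",
    )
    traj, r = concept_primed_rollout(quiz)
    print(f"trajectory={traj!r}, reward={r}")
```

In a full RLVR pipeline this reward would presumably feed a policy-gradient update (e.g., PPO- or GRPO-style) over many sampled trajectories; the paper's actual trajectory-level mechanism is truncated in the abstract above.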