[2602.22751] Know What You Know: Metacognitive Entropy Calibration for Verifiable RL Reasoning
Summary
The paper proposes EGPO, a metacognitive entropy calibration framework that integrates a model's intrinsic uncertainty into Reinforcement Learning with Verifiable Rewards (RLVR), improving reasoning performance in large reasoning models.
Why It Matters
This research addresses a critical gap in current reinforcement learning methodology by incorporating the model's own uncertainty into the reward signal, which is essential for improving reasoning capabilities in AI models. The advance could matter most in fields that require complex multi-step reasoning, such as mathematics and question answering.
Key Takeaways
- EGPO enhances large reasoning models by integrating intrinsic uncertainty.
- The framework addresses the uncertainty-reward mismatch in RLVR pipelines.
- Improved reasoning performance is demonstrated across multiple benchmarks.
- The proposed method allows for stable policy optimization while managing overconfidence.
- EGPO provides a principled approach for advancing reasoning capabilities in AI.
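To make the uncertainty-reward mismatch concrete, here is a minimal sketch of an entropy-calibrated verifiable reward. Note this is an illustration of the general idea, not the paper's actual EGPO objective (whose formulation is truncated in the abstract below): the function names, the exponential weighting, and the coefficient `lam` are all assumptions for this example. The point is that a plain binary reward treats a confident correct solution and a barely-better-than-chance correct solution identically, whereas an entropy-aware reward distinguishes them.

```python
import numpy as np

def token_entropy(probs):
    """Shannon entropy (in nats) of one token's predictive distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log(p)))

def binary_reward(correct):
    """Outcome-only RLVR signal: ignores how uncertain the model was."""
    return 1.0 if correct else -1.0

def calibrated_reward(correct, step_probs, lam=0.5):
    """Illustrative entropy-calibrated reward (NOT the paper's EGPO rule).

    The binary correctness signal is shrunk toward zero as the mean
    token entropy of the sampled solution grows, so a confident correct
    answer earns more credit than an uncertain one, and a confident
    error is penalized harder than an uncertain guess.
    """
    mean_h = float(np.mean([token_entropy(p) for p in step_probs]))
    return binary_reward(correct) * np.exp(-lam * mean_h)
```

For example, a correct solution sampled from near-deterministic distributions keeps almost the full reward, while the same correct answer produced from 50/50 token distributions is discounted; under the plain `binary_reward` both would score 1.0, which is exactly the mismatch the paper targets.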
Computer Science > Artificial Intelligence
arXiv:2602.22751 (cs)
[Submitted on 26 Feb 2026]
Title: Know What You Know: Metacognitive Entropy Calibration for Verifiable RL Reasoning
Authors: Qiannian Zhao, Chen Yang, Jinhao Jing, Yunke Zhang, Xuhui Ren, Lu Yu, Shijie Zhang, Hongzhi Yin
Abstract: Large reasoning models (LRMs) have emerged as a powerful paradigm for solving complex real-world tasks. In practice, these models are predominantly trained via Reinforcement Learning with Verifiable Rewards (RLVR), yet most existing outcome-only RLVR pipelines rely almost exclusively on a binary correctness signal and largely ignore the model's intrinsic uncertainty. We term this discrepancy the uncertainty-reward mismatch, under which high- and low-uncertainty solutions are treated equivalently, preventing the policy from "Know What You Know" and impeding the shift from optimizing for correct answers to optimizing effective reasoning paths. This limitation is especially critical in reasoning-centric tasks such as mathematics and question answering, where performance hinges on the quality of the model's internal reasoning process rather than mere memorization of final answers. To address this, we propose EGPO, a metacognitive entropy calibration framework that explicitly integrates intrinsic uncertainty into ...