[2602.22508] Mirroring the Mind: Distilling Human-Like Metacognitive Strategies into Large Language Models
Summary
The paper presents Metacognitive Behavioral Tuning (MBT), a framework designed to enhance large reasoning models by incorporating human-like metacognitive strategies to improve reasoning stability and accuracy.
Why It Matters
As AI systems increasingly tackle complex reasoning tasks, understanding and improving their cognitive processes is crucial. This research addresses common failure modes in reasoning models, where valid intermediate logic collapses before a correct final answer is reached, and provides a method that could significantly enhance their performance and reliability in real-world applications.
Key Takeaways
- Metacognitive Behavioral Tuning (MBT) improves reasoning in large models.
- MBT addresses structural fragility in reasoning tasks.
- The framework includes two formulations: MBT-S and MBT-R.
- Experiments show MBT consistently outperforms baseline models on multi-hop QA benchmarks.
- Implementing metacognitive strategies leads to reduced token consumption and higher accuracy.
Computer Science > Artificial Intelligence
arXiv:2602.22508 (cs) [Submitted on 26 Feb 2026]
Title: Mirroring the Mind: Distilling Human-Like Metacognitive Strategies into Large Language Models
Authors: Ik-hwan Kim, Hyeongrok Han, Mingi Jung, Sangwon Yu, Jinseok Hong, Sang Hun Kim, Yoonyoung Choi, Sungroh Yoon
Abstract
Large Reasoning Models (LRMs) often exhibit structural fragility in complex reasoning tasks, failing to produce correct answers even after successfully deriving valid intermediate steps. Through systematic analysis, we observe that these failures frequently stem not from a lack of reasoning capacity, but from a deficiency in self-regulatory control, where valid logic is destabilized by uncontrolled exploration or the failure to recognize logical sufficiency. Motivated by this observation, we propose Metacognitive Behavioral Tuning (MBT), a post-training framework that explicitly injects metacognitive behaviors into the model's thought process. MBT implements this via two complementary formulations: (1) MBT-S, which synthesizes rigorous reasoning traces from scratch, and (2) MBT-R, which rewrites the student's initial traces to stabilize intrinsic exploration patterns. Experiments across multi-hop QA benchmarks demonstrate that MBT consistently outperforms baselines, achievin...
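The two formulations described in the abstract can be pictured as two routes for building the same kind of fine-tuning example: MBT-S synthesizes a metacognitively structured trace from scratch, while MBT-R keeps the student model's own trace and rewrites it with stabilizing self-checks. The sketch below is purely illustrative; every function name, the trace format, and the stand-in "metacognitive checkpoint" text are assumptions for exposition, not details from the paper.

```python
# Illustrative sketch of the two MBT data-construction routes.
# All names (synthesize_trace, rewrite_trace, Example) are hypothetical;
# the paper does not specify an implementation.
from dataclasses import dataclass


@dataclass
class Example:
    """A single supervised fine-tuning example: question, trace, answer."""
    question: str
    trace: str   # chain-of-thought reasoning trace
    answer: str


def synthesize_trace(question: str, answer: str) -> str:
    """MBT-S route: build a rigorous trace from scratch (stand-in logic).

    In the paper's framing, a synthesized trace would embed explicit
    metacognitive behaviors (planning, monitoring, recognizing when the
    evidence is logically sufficient). Stubbed here as template text.
    """
    return (
        f"Plan: decompose the question '{question}'. "
        f"Monitor: check each intermediate step before proceeding. "
        f"Verify: evidence is sufficient; commit to answer '{answer}'."
    )


def rewrite_trace(student_trace: str, answer: str) -> str:
    """MBT-R route: preserve the student's own exploration pattern but
    append a stabilizing sufficiency check (stand-in logic)."""
    return (
        student_trace
        + f" [Check: reasoning is sufficient; stop exploring and "
        f"commit to answer '{answer}'.]"
    )


# Both routes yield (question, trace, answer) examples for post-training.
q, a = "Who mentored the author of Book X?", "Person Y"
mbt_s = Example(q, synthesize_trace(q, a), a)

student_trace = "Maybe first find the author of Book X, then their mentor..."
mbt_r = Example(q, rewrite_trace(student_trace, a), a)
```

The key design difference the abstract draws is visible here: MBT-S controls the entire trace, while MBT-R only edits the tail of the student's trace, stabilizing its intrinsic exploration rather than replacing it.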