[2506.08125] Not All Tokens Matter: Towards Efficient LLM Reasoning via Token Significance in Reinforcement Learning
Computer Science > Machine Learning

arXiv:2506.08125 (cs) [Submitted on 9 Jun 2025 (v1), last revised 6 Apr 2026 (this version, v2)]

Title: Not All Tokens Matter: Towards Efficient LLM Reasoning via Token Significance in Reinforcement Learning

Authors: Hanbing Liu, Lang Cao, Yuanyi Ren, Mengyu Zhou, Haoyu Dong, Xiaojun Ma, Shi Han, Dongmei Zhang

Abstract: Large language models (LLMs) show strong reasoning abilities but often produce unnecessarily long explanations that reduce efficiency. Although reinforcement learning (RL) has been used to improve reasoning, most methods focus on accuracy and rely on uniform length-based rewards that overlook the differing contributions of individual tokens, often harming correctness. We revisit length optimization in RL through the perspective of token significance. Observing that many chain-of-thought (CoT) tokens contribute little to the final answer, we introduce a significance-aware length reward that selectively penalizes insignificant tokens, reducing redundancy while preserving essential reasoning. We also propose a dynamic length reward that encourages more detailed reasoning early in training and gradually shifts toward conciseness as learning progresses. Integrating these components into standard policy optimization yields a framework...
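The abstract's two reward components can be illustrated with a minimal sketch. This is not the paper's implementation: the per-token significance scores, the threshold, the penalty weight `alpha`, and the linear annealing schedule are all assumptions chosen for illustration; the paper's actual significance measure and reward shaping may differ.

```python
from typing import Sequence

def significance_aware_length_reward(
    significance: Sequence[float],  # assumed per-token significance scores in [0, 1]
    threshold: float = 0.1,         # hypothetical cutoff: tokens below it count as insignificant
    alpha: float = 0.01,            # hypothetical penalty weight per insignificant token
) -> float:
    """Penalize only tokens whose significance falls below the threshold,
    so essential reasoning tokens incur no length penalty."""
    n_insignificant = sum(1 for s in significance if s < threshold)
    return -alpha * n_insignificant

def dynamic_weight(step: int, total_steps: int) -> float:
    """Assumed linear schedule: the length penalty is weak early in training
    (allowing detailed reasoning) and ramps up to full strength later."""
    return min(1.0, max(0.0, step / total_steps))

def total_reward(
    accuracy_reward: float,
    significance: Sequence[float],
    step: int,
    total_steps: int,
) -> float:
    """Combine the task (accuracy) reward with the scheduled length penalty."""
    return accuracy_reward + dynamic_weight(step, total_steps) * \
        significance_aware_length_reward(significance)
```

For example, a rollout with significance scores `[0.05, 0.5, 0.02]` has two tokens below the 0.1 threshold, giving a length reward of `-0.02`; early in training (`step=0`) the scheduled penalty contributes nothing, while at the end (`step=total_steps`) it applies in full.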