[2602.22642] Compress the Easy, Explore the Hard: Difficulty-Aware Entropy Regularization for Efficient LLM Reasoning
Summary
This paper introduces CEEH (Compress the Easy, Explore the Hard), an approach that combines difficulty-aware entropy regularization with reinforcement learning to make reasoning in large language models (LLMs) more efficient.
Why It Matters
As LLMs become increasingly integral to various applications, optimizing their reasoning capabilities without sacrificing performance is crucial. This research addresses the balance between response length and reasoning quality, potentially improving real-world deployment of LLMs in complex tasks.
Key Takeaways
- CEEH dynamically adjusts entropy regularization based on question difficulty.
- The approach allows for aggressive compression on easier questions while maintaining exploration for harder ones.
- CEEH improves reasoning efficiency without sacrificing accuracy, as demonstrated across six benchmarks.
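The core idea of the takeaways above can be sketched as a difficulty-scaled entropy bonus in the RL objective. The sketch below is a minimal illustration under stated assumptions: the paper does not specify this exact formula, and the function names (`difficulty`, `entropy_coef`, `rl_loss`) and the use of rollout pass rate as the difficulty signal are hypothetical choices for illustration.

```python
def difficulty(pass_rate: float) -> float:
    """Proxy for question difficulty: questions with a low pass rate
    across sampled rollouts are treated as hard (assumption)."""
    return 1.0 - pass_rate

def entropy_coef(pass_rate: float, base: float = 0.01) -> float:
    """Scale the entropy-regularization coefficient by difficulty:
    near zero for easy questions (permitting aggressive length
    compression), larger for hard ones (preserving exploration)."""
    return base * difficulty(pass_rate)

def rl_loss(policy_loss: float, entropy: float, pass_rate: float) -> float:
    """Toy combined objective: the entropy bonus is subtracted so that
    maximizing entropy is rewarded only in proportion to difficulty."""
    return policy_loss - entropy_coef(pass_rate) * entropy
```

With this scaling, an easy question (pass rate 1.0) receives no entropy bonus and can be compressed freely, while a hard question (pass rate 0.0) receives the full bonus, discouraging the premature entropy collapse the paper identifies.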
Computer Science > Machine Learning
arXiv:2602.22642 (cs) [Submitted on 26 Feb 2026]
Title: Compress the Easy, Explore the Hard: Difficulty-Aware Entropy Regularization for Efficient LLM Reasoning
Authors: Qin-Wen Luo, Sheng Ren, Xiang Chen, Rui Liu, Jun Fang, Naiqiang Tan, Sheng-Jun Huang
Abstract: Chain-of-Thought (CoT) has substantially empowered Large Language Models (LLMs) to tackle complex reasoning tasks, yet the verbose nature of explicit reasoning steps incurs prohibitive inference latency and computational costs, limiting real-world deployment. While existing compression methods, ranging from self-training to Reinforcement Learning (RL) with length constraints, attempt to mitigate this, they often sacrifice reasoning capability for brevity. We identify a critical failure mode in these approaches: explicitly optimizing for shorter trajectories triggers rapid entropy collapse, which prematurely shrinks the exploration space and stifles the discovery of valid reasoning paths, particularly for challenging questions requiring extensive deduction. To address this issue, we propose Compress responses for Easy questions and Explore Hard ones (CEEH), a difficulty-aware approach to RL-based efficient reasoning. CEEH dynamically assesses instance difficulty to apply selective e...