[2603.29957] Think Anywhere in Code Generation
Computer Science > Software Engineering
arXiv:2603.29957 (cs)
[Submitted on 31 Mar 2026 (v1), last revised 2 Apr 2026 (this version, v2)]

Title: Think Anywhere in Code Generation
Authors: Xue Jiang, Tianyu Zhang, Ge Li, Mengyang Liu, Taozhi Chen, Zhenhua Xu, Binhua Li, Wenpin Jiao, Zhi Jin, Yongbin Li, Yihong Dong

Abstract: Recent advances in reasoning Large Language Models (LLMs) have relied primarily on upfront thinking, where reasoning occurs before the final answer. However, this approach suffers from critical limitations in code generation: upfront thinking is often insufficient, since a problem's full complexity reveals itself only during implementation, and it cannot adaptively allocate reasoning effort across a generation process whose difficulty varies significantly. In this paper, we propose Think-Anywhere, a novel reasoning mechanism that enables LLMs to invoke thinking on demand at any token position during code generation. We achieve Think-Anywhere by first teaching LLMs to imitate the reasoning patterns through cold-start training, then leveraging outcome-based RL rewards to drive the model's autonomous exploration of when and where to invoke reasoning. Extensive experiments on four mainstream code generation benchmarks (i.e., LeetCode, LiveCodeBench, HumanEval, and MBPP) show that Think-Anywhere achieves state-of-the...
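The abstract describes a decoding scheme in which the model may open an inline reasoning span at any token position, with the reasoning text excluded from the final code. As a minimal sketch of that idea, the snippet below filters such spans out of a generated token stream; the `<think>`/`</think>` delimiter tokens and all other names are hypothetical illustrations, not details taken from the paper.

```python
# Hypothetical sketch: a model trained for "think anywhere" decoding may
# emit a reasoning span (here delimited by <think> ... </think>) at any
# point in the output. Post-processing keeps only the code tokens.
# These delimiter tokens are assumptions for illustration; the paper's
# actual special tokens are not given in this abstract.

def strip_thinking(tokens, open_tok="<think>", close_tok="</think>"):
    """Remove inline reasoning spans from a generated token stream."""
    kept, depth = [], 0
    for tok in tokens:
        if tok == open_tok:
            depth += 1           # entering a reasoning span
        elif tok == close_tok and depth > 0:
            depth -= 1           # leaving a reasoning span
        elif depth == 0:
            kept.append(tok)     # ordinary code token
    return kept

# Example: reasoning invoked mid-function, right where it is needed.
stream = ["def", "solve():", "<think>", "edge", "case:", "empty", "input",
          "</think>", "    return", "42"]
print(strip_thinking(stream))  # → ['def', 'solve():', '    return', '42']
```

The key difference from upfront thinking is that the span can appear anywhere in the stream, so the model can defer reasoning until the implementation step that actually requires it.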