[2510.05132] Training Large Language Models To Reason In Parallel With Global Forking Tokens
Computer Science > Computation and Language

arXiv:2510.05132 (cs)

[Submitted on 1 Oct 2025 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: Training Large Language Models To Reason In Parallel With Global Forking Tokens

Authors: Sheng Jia, Xiao Wang, Shiva Prasad Kasiviswanathan

Abstract: Although LLMs have demonstrated improved performance by scaling parallel test-time compute, doing so relies on generating reasoning paths that are both diverse and accurate. For challenging problems, the forking tokens that trigger diverse yet correct reasoning modes are typically deep in the sampling tree. Consequently, common strategies to encourage diversity, such as temperature scaling, encounter a worsened trade-off between diversity and accuracy. Motivated by this challenge, we treat parallel reasoning as a set-of-next-token-prediction problem and incorporate a set-based global loss into Supervised Fine-Tuning (SFT) using bipartite matching between global forking tokens and unique reasoning traces. We observe that whereas naive fine-tuning with multiple reasoning traces collapses these unique reasoning modes, our proposed method, Set Supervised Fine-Tuning (SSFT), preserves these modes and produces emergent global forking tokens. Global Forking Policy Optimization (GFPO) leverages these maximally steerable ...
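The core idea in the abstract, pairing a set of global forking tokens with a set of unique reasoning traces via bipartite matching so that each token is supervised on exactly one trace, can be sketched as follows. This is a hedged illustration under assumed inputs, not the paper's implementation: `cost[i, j]` stands in for the negative log-likelihood of reference trace `j` when decoding is conditioned on forking token `i`, and the function name `set_matching_loss` is invented for this example.

```python
# Sketch of a set-based matching loss in the spirit of SSFT's
# bipartite matching between global forking tokens and reasoning traces.
# Illustrative only: names and the cost construction are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def set_matching_loss(cost: np.ndarray) -> float:
    """cost[i, j] = NLL of reference trace j when decoding starts
    from global forking token i. Returns the total loss under the
    minimum-cost one-to-one assignment (Hungarian matching)."""
    rows, cols = linear_sum_assignment(cost)
    return float(cost[rows, cols].sum())

# Toy example: 3 forking tokens x 3 traces. Each token is cheapest on
# a different trace, so matching keeps the reasoning modes distinct
# instead of letting all tokens collapse onto one trace.
cost = np.array([
    [0.2, 1.5, 2.0],   # token 0 fits trace 0 best
    [1.8, 0.3, 1.1],   # token 1 fits trace 1 best
    [2.2, 1.4, 0.5],   # token 2 fits trace 2 best
])
print(set_matching_loss(cost))  # 0.2 + 0.3 + 0.5 = 1.0
```

Minimizing this matched loss, rather than averaging every token against every trace, is what lets each forking token specialize on a distinct reasoning mode.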