[2603.08899] ConFu: Contemplate the Future for Better Speculative Sampling
Computer Science > Computation and Language
arXiv:2603.08899 (cs)
[Submitted on 9 Mar 2026 (v1), last revised 17 Apr 2026 (this version, v2)]

Title: ConFu: Contemplate the Future for Better Speculative Sampling
Authors: Zongyue Qin, Raghavv Goel, Mukul Gagrani, Risheek Garrepalli, Mingu Lee, Yizhou Sun

Abstract: Speculative decoding has emerged as a powerful approach to accelerating large language model (LLM) inference: a lightweight draft model proposes candidate tokens that are subsequently verified by the target model. The effectiveness of this paradigm depends critically on the quality of the draft model. While recent advances such as the EAGLE series achieve state-of-the-art speedups, existing draft models remain limited by error accumulation: they condition only on the current prefix, so their predictions drift away from the target model over successive steps. In this work, we propose \textbf{ConFu} (Contemplate the Future), a novel speculative decoding framework that enables draft models to anticipate the future direction of generation. ConFu introduces (i) contemplate tokens and soft prompts that allow the draft model to leverage future-oriented signals from the target model at negligible cost, (ii) a dynamic contemplate token mechanism with MoE to enable context-aware future prediction, and (iii) a training fr...
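To make the draft-then-verify paradigm concrete, here is a minimal sketch of the standard speculative sampling acceptance rule that underlies frameworks like ConFu. All names, the toy 3-token vocabulary, and the per-position distributions are illustrative assumptions, not part of the paper; the sketch shows only the generic verification step (accept a drafted token x with probability min(1, p(x)/q(x)), otherwise resample from the normalized residual max(0, p − q)), which preserves the target distribution exactly.

```python
import random

def residual_sample(p, q, rng):
    # On rejection, sample from the normalized residual max(0, p - q).
    # This correction keeps the committed tokens distributed exactly as p.
    resid = {t: max(0.0, p[t] - q[t]) for t in p}
    z = sum(resid.values())
    r = rng.random() * z
    for t, w in resid.items():
        r -= w
        if r <= 0:
            return t
    return max(resid, key=resid.get)

def verify(drafted, q_dists, p_dists, rng):
    """Accept drafted tokens left to right; return the committed tokens.

    drafted : token ids proposed by the draft model
    q_dists : per-position draft distributions (dict token -> prob)
    p_dists : per-position target distributions (dict token -> prob)
    """
    out = []
    for x, q, p in zip(drafted, q_dists, p_dists):
        if rng.random() < min(1.0, p[x] / q[x]):
            out.append(x)                           # target agrees: accept
        else:
            out.append(residual_sample(p, q, rng))  # correct and stop
            break
    return out

# Hypothetical 3-token vocabulary {0, 1, 2}. The target puts at least as
# much mass as the draft on each drafted token, so both are accepted.
rng = random.Random(0)
q = [{0: 0.6, 1: 0.3, 2: 0.1}, {0: 0.2, 1: 0.5, 2: 0.3}]
p = [{0: 0.7, 1: 0.2, 2: 0.1}, {0: 0.1, 1: 0.6, 2: 0.3}]
print(verify([0, 1], q, p, rng))  # -> [0, 1]
```

The error-accumulation problem the abstract describes arises at draft time, before this verification step: because q at each position conditions only on the prefix, later drafted tokens are rejected more often, which is what ConFu's future-oriented signals aim to mitigate.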