[2603.23911] Self-Distillation for Multi-Token Prediction
Computer Science > Computation and Language
arXiv:2603.23911 (cs) [Submitted on 25 Mar 2026]

Title: Self-Distillation for Multi-Token Prediction
Authors: Guoliang Zhao, Ruobing Xie, An Wang, Shuaipeng Li, Huaibing Xie, Xingwu Sun

Abstract: As Large Language Models (LLMs) scale up, inference efficiency becomes a critical bottleneck. Multi-Token Prediction (MTP) can accelerate LLM inference by predicting multiple future tokens in parallel. However, existing MTP approaches still face two challenges: the limited acceptance rates of MTP heads, and the difficulty of jointly training multiple MTP heads. We therefore propose MTP-D, a simple yet effective self-distillation method with minimal additional training cost that boosts MTP-head acceptance rates (+7.5%) while largely preserving main-head performance. We also introduce a looped extension strategy for MTP-D, enabling effective and economical MTP-head extension and a further significant inference speedup over 1-head MTP (+220.4%). Moreover, we systematically explore and validate key insights into distillation strategies and the potential scalability of MTP through extensive experiments on seven benchmarks. These results demonstrate that MTP-D and the looped extension strategy effectively enhance MTP-head performance and inference efficiency, facilitating the practical use of MTP in LLMs.
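The abstract does not spell out the distillation objective, but a common form of self-distillation is a KL-divergence term that pulls a student head's output distribution toward the (frozen) teacher distribution of the main head. The sketch below illustrates that generic idea in NumPy; the function names, the temperature, and the specific loss shape are illustrative assumptions, not the paper's actual MTP-D formulation:

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the vocabulary axis
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_distill_loss(main_logits, mtp_logits, temperature=2.0):
    """Illustrative self-distillation term: KL(teacher || student) between
    the main head's temperature-softened distribution (teacher, treated as
    fixed) and an MTP head's distribution (student), averaged over positions.
    """
    t = softmax(main_logits / temperature)                 # teacher distribution
    log_s = np.log(softmax(mtp_logits / temperature) + 1e-12)  # student log-probs
    log_t = np.log(t + 1e-12)
    # per-position KL, then mean over positions
    return float(np.mean(np.sum(t * (log_t - log_s), axis=-1)))

# toy example: 4 sequence positions, vocabulary of 8 tokens
rng = np.random.default_rng(0)
main_logits = rng.normal(size=(4, 8))
loss_random = self_distill_loss(main_logits, rng.normal(size=(4, 8)))
loss_match = self_distill_loss(main_logits, main_logits)  # identical heads: KL is ~0
```

In a real training loop this term would be added (with some weight) to the MTP heads' standard cross-entropy loss, with gradients blocked through the teacher distribution so the main head is not degraded.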