[2510.14751] Beyond Multi-Token Prediction: Pretraining LLMs with Future Summaries
Computer Science > Machine Learning
arXiv:2510.14751 (cs)
[Submitted on 16 Oct 2025 (v1), last revised 25 Mar 2026 (this version, v2)]

Title: Beyond Multi-Token Prediction: Pretraining LLMs with Future Summaries
Authors: Divyat Mahajan, Sachin Goyal, Badr Youbi Idrissi, Mohammad Pezeshki, Ioannis Mitliagkas, David Lopez-Paz, Kartik Ahuja

Abstract: Next-token prediction (NTP) has driven the success of large language models (LLMs), but it struggles with long-horizon reasoning, planning, and creative writing; these limitations are largely attributed to teacher-forced training. Multi-token prediction (MTP) partially mitigates these issues by predicting several future tokens at once, but it mostly captures short-range dependencies and offers limited improvement. We propose future summary prediction (FSP), which trains an auxiliary head to predict a compact representation of the long-term future, preserving information relevant for long-form generations. We explore two variants of FSP: handcrafted summaries, for example, a bag-of-words summary of the future sequence, and learned summaries, which use embeddings produced by a reverse language model trained in right-to-left order. Large-scale pretraining experiments (3B- and 8B-parameter models) demonstrate that FSP provides improvements over both NTP and MTP across ...
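To make the handcrafted variant concrete, here is a minimal sketch of how a bag-of-words target over the future sequence might be constructed. This is an illustrative assumption based only on the abstract, not the authors' implementation: the function name, the multi-hot encoding, and the optional `horizon` window are all hypothetical choices.

```python
def bow_future_summary(tokens, t, vocab_size, horizon=None):
    """Multi-hot bag-of-words over tokens that appear after position t.

    Hypothetical sketch of the 'handcrafted summary' target from the
    abstract: an auxiliary head could be trained to predict this vector
    alongside the standard next-token loss. `horizon` caps how far ahead
    the summary looks; None means the rest of the sequence.
    """
    end = len(tokens) if horizon is None else min(len(tokens), t + 1 + horizon)
    summary = [0.0] * vocab_size
    for tok in tokens[t + 1:end]:
        summary[tok] = 1.0  # presence only, not counts
    return summary

# Example: with future tokens [4, 1, 5], vocab indices 1, 4, 5 are set.
target = bow_future_summary([3, 1, 4, 1, 5], t=1, vocab_size=6)
# target == [0.0, 1.0, 0.0, 0.0, 1.0, 1.0]
```

Because the target is multi-hot rather than a single class, the auxiliary head would naturally pair with a per-vocabulary-entry binary loss (e.g. binary cross-entropy) rather than the softmax cross-entropy used for next-token prediction.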