[2506.18841] LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning
Computer Science > Computation and Language
arXiv:2506.18841 (cs)
[Submitted on 23 Jun 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning
Authors: Yuhao Wu, Yushi Bai, Zhiqiang Hu, Roy Ka-Wei Lee, Juanzi Li

Abstract: Ultra-long generation by large language models (LLMs) is a widely demanded scenario, yet it remains a significant challenge due to their maximum generation length limit and overall quality degradation as sequence length increases. Previous approaches, exemplified by LongWriter, typically rely on "teaching", which involves supervised fine-tuning (SFT) on synthetic long-form outputs. However, this strategy heavily depends on synthetic SFT data, which is difficult and costly to construct, often lacks coherence and consistency, and tends to be overly artificial and structurally monotonous. In this work, we propose an incentivization-based approach that, starting entirely from scratch and without relying on any annotated or synthetic data, leverages reinforcement learning (RL) to foster the emergence of ultra-long, high-quality text generation capabilities in LLMs. We perform RL training starting from a base model, similar to R1-Zero, guiding it to engage in reasoning that facilitates planning and refinement during the writ...