[2601.18150] FP8-RL: A Practical and Stable Low-Precision Stack for LLM Reinforcement Learning
arXiv:2601.18150 (cs) — Computer Science > Machine Learning
[Submitted on 26 Jan 2026 (v1), last revised 10 Apr 2026 (this version, v2)]

Title: FP8-RL: A Practical and Stable Low-Precision Stack for LLM Reinforcement Learning
Authors: Zhaopeng Qiu, Shuang Yu, Jingqi Zhang, Shuai Zhang, Xue Huang, Jingyi Yang, Junjie Lai

Abstract: Reinforcement learning (RL) for large language models (LLMs) is increasingly bottlenecked by rollout (generation), where long output sequence lengths make attention and KV-cache memory dominate end-to-end step time. FP8 offers an attractive lever for accelerating RL by reducing compute cost and memory traffic during rollout, but applying FP8 in RL introduces unique engineering and algorithmic challenges: policy weights change every step (requiring repeated quantization and weight synchronization into the inference engine), and low-precision rollouts can deviate from the higher-precision policy assumed by the trainer, causing train-inference mismatch and potential instability. This report presents a practical FP8 rollout stack for LLM RL, implemented in the veRL ecosystem with support for common training backends (e.g., FSDP/Megatron-LM) and inference engines (e.g., vLLM/SGLang). We (i) enable FP8 W8A8 linear-layer rollout using blockwise FP8 quantization, (ii) extend FP8 to KV-cache to remove long...
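To make the blockwise FP8 quantization mentioned in point (i) concrete, the following is a minimal illustrative sketch, not the paper's implementation: it assumes e4m3 weights with one dynamic scale per contiguous block (the block size of 128 and all function names are hypothetical), and it only simulates e4m3's 3-bit mantissa rounding rather than casting to a real FP8 dtype.

```python
import numpy as np

# Hypothetical sketch of blockwise FP8 (e4m3) weight quantization; the
# block size and API are illustrative assumptions, not the paper's code.
FP8_E4M3_MAX = 448.0  # largest finite value representable in e4m3


def simulate_e4m3_round(x: np.ndarray) -> np.ndarray:
    """Approximate e4m3 rounding by keeping a 3-bit mantissa.

    Exponent-range limits are handled by clipping upstream; a real kernel
    would cast to an FP8 dtype instead of this float simulation.
    """
    m, e = np.frexp(x)  # x = m * 2**e with m in [0.5, 1)
    return np.ldexp(np.round(m * 16.0) / 16.0, e)


def quantize_blockwise(w: np.ndarray, block: int = 128):
    """One dynamic scale per `block` contiguous weights, so an outlier
    only degrades its own block rather than the whole tensor."""
    flat = w.reshape(-1, block)
    scale = np.abs(flat).max(axis=1, keepdims=True) / FP8_E4M3_MAX
    scale = np.where(scale == 0.0, 1.0, scale)  # guard all-zero blocks
    q = simulate_e4m3_round(
        np.clip(flat / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    )
    return q, scale


def dequantize_blockwise(q: np.ndarray, scale: np.ndarray, shape):
    """Recover an approximation of the original weights."""
    return (q * scale).reshape(shape)


# Usage: quantize a toy weight vector, then reconstruct it.
w = np.linspace(-1.0, 1.0, 256)
q, scale = quantize_blockwise(w, block=128)
w_hat = dequantize_blockwise(q, scale, w.shape)
```

Because each block carries its own scale, this is also the unit that would need re-quantizing and re-syncing into the inference engine after every policy update, which is the repeated-quantization cost the abstract highlights.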