[2603.00040] Attn-QAT: 4-Bit Attention With Quantization-Aware Training
Computer Science > Machine Learning
arXiv:2603.00040 (cs) [Submitted on 9 Feb 2026]

Title: Attn-QAT: 4-Bit Attention With Quantization-Aware Training
Authors: Peiyuan Zhang, Matthew Noto, Wenxuan Tan, Chengquan Jiang, Will Lin, Wei Zhou, Hao Zhang

Abstract: Achieving reliable 4-bit attention is a prerequisite for end-to-end FP4 computation on emerging FP4-capable GPUs, yet attention remains the main obstacle due to FP4's tiny dynamic range and attention's heavy-tailed activations. This paper presents the first systematic study of 4-bit quantization-aware training (QAT) for attention. We find that "drop-in" QAT, which naively combines an FP4 forward pass with a high-precision Flash Attention (FA)-style backward pass, leads to training instability. We identify two key principles for stable FP4 attention: (1) matching low-precision recomputation of attention scores in the backward pass, and (2) resolving implicit precision assumptions in FA's gradient calculation. Based on these insights, we propose Attn-QAT and implement fused Triton kernels for training as well as FP4 inference kernels. Across diffusion and language models, Attn-QAT recovers the quality drop from FP4 attention without the explicit outlier-mitigation heuristics used in prior FP4 attention methods, and delivers up to a 1.5x speedup on an RTX 5090. Video demos can be fou...
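To make the abstract's idea of quantization-aware training for attention concrete, below is a minimal, hypothetical PyTorch sketch of fake FP4 (E2M1-grid) quantization with a straight-through estimator applied to the attention inputs and probabilities. It is not the authors' Attn-QAT method or their Triton kernels; in particular, it does not reproduce the Flash-Attention-style backward pass discussed in the paper (autograd here simply differentiates through the quantized forward, so forward and backward trivially see the same low-precision values). Names such as fake_quant_fp4 and qat_attention are invented for illustration.

```python
import torch

# Representable values of an E2M1 (FP4) format, used as a quantization grid.
FP4_GRID = torch.tensor([-6., -4., -3., -2., -1.5, -1., -0.5, 0.,
                         0.5, 1., 1.5, 2., 3., 4., 6.])

def fake_quant_fp4(x: torch.Tensor) -> torch.Tensor:
    """Fake FP4 quantization with a straight-through estimator (STE)."""
    # Per-tensor absmax scaling into the nominal FP4 range [-6, 6].
    scale = x.abs().amax().clamp(min=1e-8) / 6.0
    xs = x / scale
    # Snap each element to the nearest representable FP4 value.
    idx = (xs.unsqueeze(-1) - FP4_GRID.to(x.device)).abs().argmin(dim=-1)
    q = FP4_GRID.to(x.device)[idx] * scale
    # STE: forward uses the quantized value, gradient passes through unchanged.
    return x + (q - x).detach()

def qat_attention(q, k, v):
    """Attention whose Q, K, V and probabilities are fake-quantized to FP4,
    so the training-time forward pass mimics FP4 inference."""
    d = q.shape[-1]
    qq, kq, vq = fake_quant_fp4(q), fake_quant_fp4(k), fake_quant_fp4(v)
    scores = (qq @ kq.transpose(-2, -1)) / d**0.5
    probs = torch.softmax(scores, dim=-1)
    return fake_quant_fp4(probs) @ vq

# Tiny usage example: gradients flow to Q, K, V through the STE.
q = torch.randn(2, 4, 16, 64, requires_grad=True)
k = torch.randn(2, 4, 16, 64, requires_grad=True)
v = torch.randn(2, 4, 16, 64, requires_grad=True)
qat_attention(q, k, v).sum().backward()
```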