[2604.02556] Fast NF4 Dequantization Kernels for Large Language Model Inference
Computer Science > Machine Learning

arXiv:2604.02556 (cs)

[Submitted on 2 Apr 2026]

Title: Fast NF4 Dequantization Kernels for Large Language Model Inference

Authors: Xiangbo Qi, Chaoyi Jiang, Murali Annavaram

Abstract: Large language models (LLMs) have grown beyond the memory capacity of single GPU devices, necessitating quantization techniques for practical deployment. While NF4 (4-bit NormalFloat) quantization enables a 4$\times$ memory reduction, inference on current NVIDIA GPUs (e.g., Ampere A100) requires expensive dequantization back to FP16 format, creating a critical performance bottleneck. This paper presents a lightweight shared memory optimization that addresses this gap through principled memory hierarchy exploitation while maintaining full ecosystem compatibility. We compare our technique against the open-source BitsAndBytes implementation, achieving 2.0--2.2$\times$ kernel speedup across three models (Gemma 27B, Qwen3 32B, and Llama3.3 70B) and up to 1.54$\times$ end-to-end improvement by leveraging the 12--15$\times$ latency advantage of shared memory over global memory access. Our optimization reduces instruction counts through simplified indexing logic while using only 64 bytes of shared memory per thread block, demonstrating that lightweight optimizations can deliver substantial performance gains wit...
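To make the core idea concrete, below is a minimal CUDA sketch of shared-memory NF4 dequantization. It illustrates the general technique the abstract describes, not the authors' actual kernel: the kernel name, signature, nibble ordering, and the per-block absmax scaling layout are assumptions. The 16-entry codebook is the standard NF4 quantile table from the QLoRA paper, and staging it in shared memory as 16 FP32 values accounts for the 64 bytes per thread block mentioned in the abstract.

// nf4_dequant_sketch.cu -- hypothetical sketch, not the paper's kernel.
#include <cuda_fp16.h>
#include <cstdint>

// Standard 16-entry NF4 codebook (normalized quantiles from the QLoRA paper).
__device__ const float kNF4[16] = {
    -1.0f,        -0.69619280f, -0.52507305f, -0.39491749f,
    -0.28444138f, -0.18477343f, -0.09105004f,  0.0f,
     0.07958030f,  0.16093020f,  0.24611230f,  0.33791524f,
     0.44070983f,  0.56261700f,  0.72295684f,  1.0f};

// Each packed byte holds two 4-bit codebook indices; each thread expands one
// byte into two FP16 weights. `absmax` is assumed to hold one FP32 scale per
// quantization block of `block_size` weights (nibble order is an assumption).
__global__ void nf4_dequantize(const uint8_t* __restrict__ packed,
                               const float* __restrict__ absmax,
                               __half* __restrict__ out,
                               int n_bytes, int block_size) {
    // Stage the codebook in shared memory: 16 floats = 64 bytes per thread
    // block, so every lookup hits low-latency on-chip storage instead of DRAM.
    __shared__ float lut[16];
    if (threadIdx.x < 16) lut[threadIdx.x] = kNF4[threadIdx.x];
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_bytes) return;

    uint8_t b = packed[i];
    int j = 2 * i;                     // index of this byte's first weight
    float s = absmax[j / block_size];  // per-block absmax scale
    out[j]     = __float2half(lut[b >> 4] * s);    // high nibble
    out[j + 1] = __float2half(lut[b & 0x0F] * s);  // low nibble
}

A launch such as nf4_dequantize<<<(n_bytes + 255) / 256, 256>>>(packed, absmax, out, n_bytes, 64) would dequantize n_bytes packed bytes with a quantization block size of 64. Keeping the table in shared memory replaces repeated global-memory codebook loads with on-chip accesses, which is where the latency advantage cited in the abstract comes from.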