[2508.04853] Provable Post-Training Quantization: Theoretical Analysis of OPTQ and Qronos
Computer Science > Machine Learning
arXiv:2508.04853 (cs)
[Submitted on 6 Aug 2025 (v1), last revised 9 Apr 2026 (this version, v2)]

Title: Provable Post-Training Quantization: Theoretical Analysis of OPTQ and Qronos
Authors: Haoyu Zhang, Shihao Zhang, Ian Colbert, Rayan Saab

Abstract: Post-training quantization (PTQ) has become a crucial tool for reducing the memory and compute costs of modern deep neural networks, including large language models (LLMs). Among PTQ algorithms, the OPTQ framework, also known as GPTQ, has emerged as a leading method due to its computational efficiency and strong empirical performance. Despite its widespread adoption, however, OPTQ lacks rigorous quantitative theoretical guarantees. This paper presents the first quantitative error bounds for both deterministic and stochastic variants of OPTQ, as well as for Qronos, a recent related state-of-the-art PTQ algorithm. We analyze how OPTQ's iterative procedure induces quantization error and derive non-asymptotic 2-norm error bounds that depend explicitly on the calibration data and a regularization parameter that OPTQ uses. Our analysis provides theoretical justification for several practical design choices, including the widely used heuristic of ordering features by decreasing norm, as well as guidance for selecting the regularization parameter...
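
The iterative procedure the abstract refers to can be made concrete with a small sketch. The following is a minimal illustration, not the paper's implementation: it quantizes one weight row coordinate by coordinate, feeds the scaled quantization error back into the not-yet-quantized coordinates through the regularized inverse Hessian built from calibration data, and orders features by decreasing norm as in the heuristic the paper analyzes. The function names, the uniform round-to-nearest quantizer, the step size, and the regularization default are all assumptions made for illustration.

```python
import numpy as np

def round_to_grid(x, step):
    """Round-to-nearest onto a uniform grid with spacing `step` (illustrative quantizer)."""
    return step * np.round(x / step)

def optq_like_quantize_row(w, X, lam=1e-2, step=0.05):
    """Sketch of an OPTQ/GPTQ-style pass over a single weight row.

    w    : (d,) weights of one output neuron
    X    : (d, n) calibration activations, one feature per row
    lam  : regularization added to the Hessian diagonal
    step : uniform quantization step size

    Names, defaults, and the simple uniform quantizer are assumptions for
    illustration; they are not taken from the paper.
    """
    d = w.shape[0]

    # Heuristic analyzed in the paper: process features in decreasing norm order.
    order = np.argsort(-np.linalg.norm(X, axis=1))
    w_work = w[order].astype(np.float64).copy()
    X_ord = X[order]

    # Regularized Hessian of the layer-wise least-squares objective and its inverse.
    H = X_ord @ X_ord.T + lam * np.eye(d)
    H_inv = np.linalg.inv(H)

    q = np.zeros(d)
    for j in range(d):
        q[j] = round_to_grid(w_work[j], step)        # quantize coordinate j
        err = (w_work[j] - q[j]) / H_inv[j, j]       # scaled quantization error
        w_work[j + 1:] -= err * H_inv[j, j + 1:]     # error feedback to remaining weights
        # Remove coordinate j from the inverse Hessian (rank-one downdate).
        H_inv -= np.outer(H_inv[:, j], H_inv[j, :]) / H_inv[j, j]

    # Undo the ordering so the output matches the original weight layout.
    q_out = np.empty(d)
    q_out[order] = q
    return q_out

# Toy usage: quantize a random row against random calibration data.
rng = np.random.default_rng(0)
d, n = 16, 64
X = rng.standard_normal((d, n))
w = rng.standard_normal(d)
q = optq_like_quantize_row(w, X, lam=1e-2, step=0.05)
print(np.linalg.norm((w - q) @ X))   # reconstruction error on the calibration data
```

The explicit rank-one downdate of the inverse Hessian mirrors the sequential update that GPTQ accelerates with a Cholesky factorization; it is written this way here for clarity rather than efficiency, and the 2-norm printed at the end is the calibration-data error quantity that the paper's bounds control.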