[2508.04853] Provable Post-Training Quantization: Theoretical Analysis of OPTQ and Qronos

arXiv - AI

Computer Science > Machine Learning — arXiv:2508.04853 (cs)
[Submitted on 6 Aug 2025 (v1), last revised 9 Apr 2026 (this version, v2)]

Title: Provable Post-Training Quantization: Theoretical Analysis of OPTQ and Qronos
Authors: Haoyu Zhang, Shihao Zhang, Ian Colbert, Rayan Saab

Abstract: Post-training quantization (PTQ) has become a crucial tool for reducing the memory and compute costs of modern deep neural networks, including large language models (LLMs). Among PTQ algorithms, the OPTQ framework (also known as GPTQ) has emerged as a leading method due to its computational efficiency and strong empirical performance. Despite its widespread adoption, however, OPTQ lacks rigorous quantitative theoretical guarantees. This paper presents the first quantitative error bounds for both deterministic and stochastic variants of OPTQ, as well as for Qronos, a recent related state-of-the-art PTQ algorithm. We analyze how OPTQ's iterative procedure induces quantization error and derive non-asymptotic 2-norm error bounds that depend explicitly on the calibration data and a regularization parameter that OPTQ uses. Our analysis provides theoretical justification for several practical design choices, including the widely used heuristic of ordering features by decreasing norm, as well as guidance for selecting the regularizati...
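The iterative procedure the abstract refers to (quantize one weight at a time, then compensate the remaining weights using the regularized calibration Hessian) can be sketched as follows. This is a minimal illustration of the OPTQ/GPTQ-style update, not the authors' exact implementation: the helper name `optq_quantize`, the uniform grid `step`, and the regularization weight `lam` are assumptions chosen for the example.

```python
import numpy as np

def optq_quantize(W, X, lam=0.01, step=0.1):
    """Sketch of OPTQ-style sequential quantization for one output neuron.

    W: (d,) weight vector; X: (m, d) calibration inputs.
    The regularized Hessian H = X^T X + lam*I controls how the rounding
    error of each coordinate is pushed onto the not-yet-quantized ones.
    """
    d = W.shape[0]
    H = X.T @ X + lam * np.eye(d)       # regularization keeps H invertible
    Hinv = np.linalg.inv(H)
    w = W.astype(float).copy()
    q = np.zeros(d)
    for i in range(d):
        q[i] = step * np.round(w[i] / step)        # nearest grid point
        err = (w[i] - q[i]) / Hinv[i, i]
        # propagate the error to the remaining coordinates so later
        # quantization steps can absorb it
        w[i + 1:] -= err * Hinv[i + 1:, i]
    return q
```

In practice, implementations often reorder the columns first (e.g., by decreasing feature norm or decreasing Hessian diagonal), which is exactly the heuristic the paper analyzes.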

Originally published on April 13, 2026. Curated by AI News.

