[2603.04359] Dissecting Quantization Error: A Concentration-Alignment Perspective
Computer Science > Machine Learning

arXiv:2603.04359 (cs) [Submitted on 4 Mar 2026]

Title: Dissecting Quantization Error: A Concentration-Alignment Perspective

Authors: Marco Federici, Boris van Breugel, Paul Whatmough, Markus Nagel

Abstract: Quantization can drastically increase the efficiency of large language and vision models, but typically incurs an accuracy drop. Recently, function-preserving transforms (e.g., rotations, the Hadamard transform, channel-wise scaling) have been successfully applied to reduce post-training quantization error, yet a principled explanation remains elusive. We analyze linear-layer quantization via the signal-to-quantization-noise ratio (SQNR), showing that for uniform integer quantization at a fixed bit width, SQNR decomposes into (i) the concentration of weights and activations (capturing spread and outliers), and (ii) the alignment of their dominant variation directions. This reveals an actionable insight: beyond concentration, the focus of most prior transforms (e.g., rotations or the Hadamard transform), improving the alignment between weights and activations can further reduce quantization error. Motivated by this, we introduce block Concentration-Alignment Transforms (CAT), a lightweight linear transformation that uses a covariance estimate from a small calibration set to jointly improve concentration and alignment...
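The SQNR criterion discussed in the abstract can be made concrete with a small numeric sketch. The snippet below is not the paper's code: it assumes symmetric per-tensor uniform integer quantization, a 4-bit width, and random toy data purely for illustration. It quantizes the weights and activations of a small linear layer and reports the output SQNR in dB, the quantity whose decomposition into concentration and alignment the paper analyzes.

import numpy as np

def uniform_quantize(x, bits=4):
    # Symmetric per-tensor uniform integer quantization at a fixed bit width
    # (an illustrative choice; the paper does not fix the quantizer granularity here).
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale  # dequantized values

def sqnr_db(reference, approximation):
    # Signal-to-quantization-noise ratio: signal power over quantization-noise power, in dB.
    noise = reference - approximation
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# Toy linear layer: quantize weights and activations, measure output SQNR.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))   # weights
X = rng.normal(size=(128, 32))   # activations (small calibration batch)

y_fp = W @ X                                                   # full-precision output
y_q = uniform_quantize(W, bits=4) @ uniform_quantize(X, bits=4)  # quantized path

print(f"output SQNR: {sqnr_db(y_fp, y_q):.2f} dB")

In this setting, a function-preserving transform (e.g., a rotation applied to W and its inverse to X) leaves y_fp unchanged while altering the quantized path, which is how such transforms can raise the measured SQNR without changing the layer's function.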