[2603.04359] Dissecting Quantization Error: A Concentration-Alignment Perspective

arXiv - AI · 3 min read

About this article

Computer Science > Machine Learning
arXiv:2603.04359 (cs) · Submitted on 4 Mar 2026

Title: Dissecting Quantization Error: A Concentration-Alignment Perspective
Authors: Marco Federici, Boris van Breugel, Paul Whatmough, Markus Nagel

Abstract: Quantization can drastically increase the efficiency of large language and vision models, but typically incurs an accuracy drop. Recently, function-preserving transforms (e.g. rotations, the Hadamard transform, channel-wise scaling) have been successfully applied to reduce post-training quantization error, yet a principled explanation remains elusive. We analyze linear-layer quantization via the signal-to-quantization-noise ratio (SQNR), showing that for uniform integer quantization at a fixed bit width, SQNR decomposes into (i) the concentration of weights and activations (capturing spread and outliers), and (ii) the alignment of their dominant variation directions. This reveals an actionable insight: beyond concentration, the focus of most prior transforms (e.g. rotations or the Hadamard transform), improving the alignment between weights and activations can further reduce quantization error. Motivated by this, we introduce block Concentration-Alignment Transforms (CAT), a lightweight linear transformation that uses a covariance estimate from a small calibration set to jointly improve concentration...
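The decomposition the abstract describes is easy to probe numerically. The sketch below is my own illustration, not the paper's CAT method: it implements per-tensor uniform integer quantization at a fixed bit width, measures SQNR, and shows how a function-preserving Hadamard rotation (one of the prior transforms the abstract cites) improves activation concentration when a few channels carry outliers. The helper names, the synthetic data, and the 4-bit setting are all assumptions made for illustration.

```python
import numpy as np

def quantize(x, bits=4):
    """Symmetric per-tensor uniform integer quantization (round-to-nearest)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

def sqnr_db(x, x_hat):
    """Signal-to-quantization-noise ratio in decibels."""
    return 10 * np.log10(np.mean(x ** 2) / np.mean((x - x_hat) ** 2))

def hadamard(n):
    """Orthonormal Hadamard matrix via Sylvester's construction (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

rng = np.random.default_rng(0)
d, n = 256, 512
W = rng.standard_normal((d, d))   # weights of one linear layer
X = rng.standard_normal((d, n))   # calibration activations
X[:8] *= 20.0                     # a few outlier channels: poor concentration

# H is orthonormal, so W @ X == (W @ H.T) @ (H @ X): the transform preserves
# the layer's function while spreading outlier energy across all channels.
H = hadamard(d)
Y = W @ X                         # exact output, identical for both configurations
for name, Wt, Xt in [("baseline", W, X), ("hadamard", W @ H.T, H @ X)]:
    Y_hat = quantize(Wt) @ quantize(Xt)
    print(f"{name:9s}  act SQNR {sqnr_db(Xt, quantize(Xt)):5.1f} dB | "
          f"out SQNR {sqnr_db(Y, Y_hat):5.1f} dB")
```

Run with this seed, the Hadamard variant should report a markedly higher activation and output SQNR, since the per-tensor scale is no longer dominated by the outlier channels. That exercises only the concentration term; the paper's CAT additionally targets the alignment term using a covariance estimate from the calibration set.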

Originally published on March 05, 2026. Curated by AI News.

Related Articles

AI Infrastructure

UMKC Announces New Master of Science in Artificial Intelligence

UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...

AI News - General · 4 min
LLMs

Built an open-source tool that auto-generates AI context files for any codebase, 150 stars in...

One of the most tedious parts of working with AI coding tools is having to manually write context files every single time. CLAUDE.md, .cu...

Reddit - Artificial Intelligence · 1 min
Machine Learning

[R] First open-source implementation of Hebbian fast-weight write-back for the BDH architecture

The BDH (Dragon Hatchling) paper (arXiv:2509.26507) describes a Hebbian synaptic plasticity mechanism where model weights update during i...

Reddit - Machine Learning · 1 min
LLMs

[R] A language model built from the damped harmonic oscillator equation — no transformer blocks

I've been building a neural architecture where the only learnable transform is the transfer function of a damped harmonic oscillator: H(ω...

Reddit - Machine Learning · 1 min

