[2604.03420] Zero-Shot Quantization via Weight-Space Arithmetic

arXiv - AI


Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.03420 (cs) [Submitted on 3 Apr 2026]

Title: Zero-Shot Quantization via Weight-Space Arithmetic
Authors: Daniele Solombrino, Antonio Andrea Gargiulo, Adrian Robert Minut, Luca Zhou, Alessandro Zirilli, Emanuele Rodolà

Abstract: We show that robustness to post-training quantization (PTQ) is a transferable direction in weight space. We call this direction the quantization vector: extracted from a donor task by simple weight-space arithmetic, it can be used to patch a receiver model and improve robustness to PTQ-induced noise by as much as 60%, without receiver-side quantization-aware training (QAT). Because the method requires no receiver training data, it provides a zero-shot, low-cost alternative to QAT for extremely low-bit deployment. We demonstrate this on Vision Transformer (ViT) models. More broadly, our results suggest that quantization robustness is not merely a byproduct of task-specific training, but a reusable feature of weight-space geometry that can be transferred rather than retrained.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2604.03420 [cs.CV] (or arXiv:2604.03420v1 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.03420
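The abstract does not spell out the arithmetic, but "simple weight-space arithmetic" for extracting and applying a direction typically follows the task-vector pattern: subtract a base checkpoint from a robustness-adapted donor checkpoint, then add the resulting difference to a receiver's weights. The following is a minimal sketch under that assumption; the function names, the scaling factor `alpha`, and the use of plain per-parameter dictionaries are all illustrative, not the paper's actual procedure.

```python
# Hypothetical sketch of task-arithmetic-style transfer of a
# "quantization vector" (assumed mechanics, not the paper's exact method).

def extract_quantization_vector(donor_robust, donor_base):
    # q = W_donor_robust - W_donor_base, computed per parameter.
    return {k: donor_robust[k] - donor_base[k] for k in donor_base}

def patch_receiver(receiver, qvec, alpha=1.0):
    # W_receiver' = W_receiver + alpha * q; alpha scales patch strength.
    return {k: w + alpha * qvec.get(k, 0.0) for k, w in receiver.items()}

# Toy example with scalar "weights" standing in for parameter tensors.
donor_base = {"w1": 1.0, "w2": -2.0}
donor_robust = {"w1": 1.5, "w2": -1.0}   # donor after robustness training
receiver = {"w1": 0.5, "w2": 3.0}        # model to patch, no QAT needed

q = extract_quantization_vector(donor_robust, donor_base)
patched = patch_receiver(receiver, q, alpha=1.0)
# q = {"w1": 0.5, "w2": 1.0}; patched = {"w1": 1.0, "w2": 4.0}
```

In a real setting the dictionaries would hold parameter tensors keyed by module name, and `alpha` would be tuned so the patch improves PTQ robustness without degrading the receiver's task accuracy.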

Originally published on April 07, 2026. Curated by AI News.
