[2602.23795] GRAIL: Post-hoc Compensation by Linear Reconstruction for Compressed Networks

arXiv - Machine Learning

About this article

Computer Science > Machine Learning
arXiv:2602.23795 (cs) [Submitted on 27 Feb 2026]
Title: GRAIL: Post-hoc Compensation by Linear Reconstruction for Compressed Networks
Authors: Wenwu Tang, Dong Wang, Lothar Thiele, Olga Saukh

Abstract: Structured deep model compression methods are hardware-friendly and substantially reduce memory and inference costs. However, under aggressive compression, the resulting accuracy degradation often necessitates post-compression finetuning, which can be impractical due to missing labeled data or high training cost. We propose GRAIL, a post-hoc blockwise compensation method: a simple zero-finetuning step applied after model compression that restores each block's input-output behavior using a small calibration set. The method summarizes hidden activations via a Gram matrix and applies ridge regression to linearly reconstruct the original hidden representation from the reduced one. The resulting reconstruction map is absorbed into the downstream projection weights, while the upstream layer is compressed. The approach is selector-agnostic (Magnitude, Wanda, Gram-based selection, or folding), data-aware (requiring only a few forward passes without gradients or labels), and recovers classic pruning or folding when the Gram matrix is near identity, indicating weak inter-channel correlation.
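To make the abstract's idea concrete, here is a minimal NumPy sketch of ridge-regression compensation for a pruned layer. This is an illustration under stated assumptions, not the paper's exact procedure: the magnitude-based channel selector, the ridge strength `lam`, and all shapes and data are hypothetical, and the paper's Gram-matrix bookkeeping is reduced here to the normal equations on a toy calibration set.

```python
import numpy as np

def ridge_reconstruction_map(H_reduced, H_full, lam=1e-3):
    """Solve min_M ||H_reduced @ M - H_full||^2 + lam * ||M||^2
    via the normal equations, using Gram-matrix statistics of the kept channels."""
    k = H_reduced.shape[1]
    gram = H_reduced.T @ H_reduced        # (k, k) Gram matrix of kept channels
    cross = H_reduced.T @ H_full          # (k, d) cross-covariance with full activations
    return np.linalg.solve(gram + lam * np.eye(k), cross)  # (k, d) reconstruction map

# Toy calibration set: d = 8 correlated hidden channels, keep k = 4 by magnitude.
rng = np.random.default_rng(0)
Z = rng.normal(size=(256, 4))                         # latent factors
X = Z @ rng.normal(size=(4, 8)) + 0.01 * rng.normal(size=(256, 8))
keep = np.argsort(-np.linalg.norm(X, axis=0))[:4]     # magnitude-style selector
H_r = X[:, keep]                                      # reduced representation

M = ridge_reconstruction_map(H_r, X)

# Absorb M into the downstream projection: X @ W  ≈  H_r @ (M @ W).
W = rng.normal(size=(8, 3))                           # hypothetical downstream weights
W_compensated = M @ W                                 # (4, 3): now accepts reduced input
err = np.linalg.norm(H_r @ W_compensated - X @ W) / np.linalg.norm(X @ W)
print(f"relative output error: {err:.3f}")
```

Because `M` is folded into `W`, inference after compensation costs no more than plain pruning; when the channels are strongly correlated, as in this toy example, the compensated output closely matches the uncompressed one.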

Originally published on March 02, 2026. Curated by AI News.
