[2602.20467] Elimination-compensation pruning for fully-connected neural networks

arXiv - Machine Learning · 4 min read

Summary

This paper introduces a novel pruning method for fully-connected neural networks, which compensates for the removal of weights by adjusting adjacent biases, enhancing model efficiency without sacrificing performance.

Why It Matters

As deep learning models grow in complexity, efficient pruning techniques are essential for reducing computational costs and improving deployment in resource-constrained environments. This research offers a new approach that could lead to more effective model optimization strategies.

Key Takeaways

  • The proposed method integrates weight removal with bias compensation to maintain network performance.
  • Analytical expressions derived for weight importance enhance the efficiency of pruning.
  • Numerical experiments show the method's effectiveness compared to traditional pruning strategies.
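The core idea of coupling weight removal with bias compensation can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's exact formulation: it assumes the simplest possible compensation, where eliminating a weight `W[j, i]` shifts the adjacent bias `b[j]` by that weight's average contribution `w * E[a_i]`, estimated on calibration data.

```python
import numpy as np

# Illustrative sketch of elimination-compensation pruning for one
# fully-connected layer y = A @ W.T + b (assumed setup, not the paper's code).

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))               # layer weights
b = np.zeros(4)                           # layer biases
A = rng.normal(loc=1.0, size=(1000, 6))   # calibration activations (inputs)

j, i = 2, 3                               # weight chosen for elimination
w = W[j, i]

# Eliminate the weight and compensate the adjacent bias with the
# weight's mean contribution on the calibration set.
W_pruned, b_comp = W.copy(), b.copy()
W_pruned[j, i] = 0.0
b_comp[j] += w * A[:, i].mean()

# Output error at unit j, with and without compensation.
err_plain = ((A @ W.T + b) - (A @ W_pruned.T + b))[:, j]
err_comp = ((A @ W.T + b) - (A @ W_pruned.T + b_comp))[:, j]
print(np.mean(err_comp**2) < np.mean(err_plain**2))
```

Because the bias absorbs the weight's mean contribution, only the fluctuation of the input activation around its mean remains as error, so the compensated variant is never worse on the calibration data.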

Computer Science > Machine Learning · arXiv:2602.20467 (cs) · [Submitted on 24 Feb 2026]

Title: Elimination-compensation pruning for fully-connected neural networks
Authors: Enrico Ballini, Luca Muscarnera, Alessio Fumagalli, Anna Scotti, Francesco Regazzoni

Abstract: The unmatched ability of Deep Neural Networks to capture complex patterns in large and noisy datasets is often attributed to their large hypothesis space, and consequently to the vast number of parameters that characterize model architectures. Pruning techniques have affirmed themselves as valid tools for extracting sparse representations of neural network parameters, carefully balancing compression against preservation of information. However, a fundamental assumption behind pruning is that expendable weights should have a small impact on the error of the network, while highly important weights should tend to have a larger influence on the inference. We argue that this idea can be generalized: what if a weight is not simply removed but also compensated for by a perturbation of the adjacent bias, which does not contribute to the network's sparsity? Our work introduces a novel pruning method in which the importance measure of each weight is computed by considering the output behavior after an optimal perturbation of its adjacent bias, efficiently computable b...
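The abstract mentions analytical expressions for weight importance under an optimal bias perturbation. As a hedged illustration (not the paper's derived formula): if a pruned weight w is compensated by the constant bias shift c minimizing the mean squared change of the pre-activation, then c* = w · E[a_i] and the irreducible error is w² · Var(a_i). That residual is a natural closed-form importance score: weights feeding low-variance activations are cheap to remove.

```python
import numpy as np

# Assumed importance score: residual error after optimal constant bias
# compensation, importance[j, i] = W[j, i]**2 * Var(a_i). Hypothetical
# illustration of the "analytical importance" idea, not the paper's formula.

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))
A = rng.normal(size=(200, 5))        # calibration activations

var = A.var(axis=0)                  # Var(a_i), one value per input unit
importance = W**2 * var              # cost of removing each weight

# Select the single cheapest weight to eliminate.
j, i = np.unravel_index(importance.argmin(), importance.shape)
print(importance[j, i] == importance.min())
```

Ranking all weights by such a score, then eliminating and compensating the lowest-scoring ones, is the general pattern the paper's elimination-compensation procedure follows.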
