[2501.05633] Regularized Top-$k$: A Bayesian Framework for Gradient Sparsification
Summary
The paper presents a Bayesian framework for gradient sparsification called Regularized Top-k (RegTop-k), which improves convergence in distributed machine learning by optimizing the selection of gradient entries based on accumulated error statistics.
Why It Matters
As machine learning models grow in complexity, efficient training methods become crucial. This research addresses the challenge of gradient sparsification, which can enhance performance in distributed settings, making it highly relevant for practitioners in the field of machine learning and AI.
Key Takeaways
- RegTop-k optimizes gradient selection using Bayesian principles.
- It significantly improves convergence rates compared to traditional Top-k methods.
- The method is validated through experiments on distributed linear regression and computer vision models.
- RegTop-k's advantage over standard Top-k becomes more pronounced at higher compression ratios.
- This framework has implications for enhancing efficiency in large-scale machine learning tasks.
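To ground the baseline the paper builds on: standard Top-k sparsification with error accumulation (error feedback) keeps the k largest-magnitude entries of the error-corrected gradient and carries the dropped entries forward, so initially-unselected entries are eventually transmitted once their accumulated error grows large enough. A minimal sketch (function name is ours, not from the paper):

```python
import numpy as np

def topk_with_error_feedback(grad, residual, k):
    """Classic Top-k sparsification with error accumulation.

    Entries not selected this round are kept in `residual` and added to
    the next gradient, so a small but persistent entry is eventually
    selected once its accumulated magnitude is large enough.
    """
    corrected = grad + residual                        # error-corrected gradient
    idx = np.argpartition(np.abs(corrected), -k)[-k:]  # indices of k largest magnitudes
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]                       # transmit only the top-k entries
    new_residual = corrected - sparse                  # accumulate what was dropped
    return sparse, new_residual
```

Note that `sparse + new_residual` always equals the corrected gradient, which is exactly the "learning-rate scaling" behavior the paper analyzes: a dropped entry is applied later, scaled up by however long it accumulated.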
Computer Science > Machine Learning
arXiv:2501.05633 (cs)
[Submitted on 10 Jan 2025 (v1), last revised 15 Feb 2026 (this version, v2)]
Title: Regularized Top-$k$: A Bayesian Framework for Gradient Sparsification
Authors: Ali Bereyhi, Ben Liang, Gary Boudreau, Ali Afana
Abstract: Error accumulation is effective for gradient sparsification in distributed settings: initially-unselected gradient entries are eventually selected as their accumulated error exceeds a certain level. The accumulation essentially behaves as a scaling of the learning rate for the selected entries. Although this property prevents the slow-down of lateral movements in distributed gradient descent, it can deteriorate convergence in some settings. This work proposes a novel sparsification scheme that controls the learning rate scaling of error accumulation. The development of this scheme follows two major steps: first, gradient sparsification is formulated as an inverse probability (inference) problem, and the Bayesian optimal sparsification mask is derived as a maximum-a-posteriori estimator. Using the prior distribution inherited from Top-k, we derive a new sparsification algorithm which can be interpreted as a regularized form of Top-k. We call this algorithm regularized Top-k (RegTop-k). It utilizes past aggregated gradients to evaluat...
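The abstract describes RegTop-k as Top-k whose selection score is regularized using past aggregated gradients. The exact MAP-derived rule is not reproduced in this excerpt, so the sketch below is purely illustrative: it keeps Top-k's magnitude-based score but discounts entries whose accumulated value deviates strongly from the past aggregated gradient (the scoring formula, function name, and `lam` parameter are our assumptions, not the paper's):

```python
import numpy as np

def regtopk_mask(corrected, past_agg, k, lam=0.5):
    """Hypothetical RegTop-k-style selection mask (illustrative only).

    Plain Top-k ranks entries by |corrected| alone. Here each entry's
    score is additionally damped by its deviation from the past
    aggregated gradient `past_agg`, a stand-in for the paper's idea of
    controlling the learning-rate scaling induced by error accumulation.
    """
    score = np.abs(corrected) / (1.0 + lam * np.abs(corrected - past_agg))
    idx = np.argpartition(score, -k)[-k:]    # k entries with the highest score
    mask = np.zeros(corrected.shape, dtype=bool)
    mask[idx] = True
    return mask
```

One sanity property of this sketch: when the corrected gradient agrees with the past aggregate (`corrected == past_agg`), the penalty vanishes and the selection reduces to ordinary Top-k.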