[2604.04493] SLaB: Sparse-Lowrank-Binary Decomposition for Efficient Large Language Models
Computer Science > Machine Learning
arXiv:2604.04493 (cs.LG)
[Submitted on 6 Apr 2026]

Title: SLaB: Sparse-Lowrank-Binary Decomposition for Efficient Large Language Models
Authors: Ziwei Li, Yuang Ma, Yi Kang

Abstract: The rapid growth of large language models (LLMs) presents significant deployment challenges due to their massive computational and memory demands. While model compression techniques such as network pruning offer potential solutions, most existing methods fail to maintain good performance at high compression ratios. To address this, we propose SLaB, a novel framework that decomposes each linear layer weight into three complementary components: a sparse matrix, a low-rank matrix, and a binary matrix. SLaB eliminates the need for retraining and leverages activation-aware pruning scores to guide the decomposition. Experiments on Llama-family models demonstrate that SLaB achieves state-of-the-art performance, reducing perplexity by up to 36% compared to existing methods at 50% compression and improving zero-shot accuracy by up to 8.98% over the baseline.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2604.04493 [cs.LG] (or arXiv:2604.04493v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.04493
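The abstract does not spell out the decomposition procedure, but the stated structure (W approximated by a sparse matrix S, a low-rank matrix L, and a binary matrix B, guided by activation-aware scores) suggests the following minimal sketch. Everything here is an assumption for illustration: the function name `slab_decompose`, the `rank` and `sparsity` budgets, the use of truncated SVD for L, the `act_norms` weighting as a stand-in for the paper's activation-aware pruning scores, and the single-scale sign quantization for B are all hypothetical, not the authors' method.

```python
import numpy as np

def slab_decompose(W, rank=8, sparsity=0.9, act_norms=None):
    """Illustrative sparse + low-rank + binary split of a weight matrix.

    Hypothetical parameters: `rank` (low-rank budget), `sparsity`
    (fraction of residual entries zeroed), `act_norms` (per-input-column
    activation norms, an assumed stand-in for activation-aware scores).
    """
    # Low-rank component: truncated SVD of W.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

    # Sparse component: keep the highest-scoring entries of the residual.
    R = W - L
    score = np.abs(R)
    if act_norms is not None:  # activation-aware weighting (assumed form)
        score = score * act_norms[None, :]
    k = max(1, int((1.0 - sparsity) * R.size))
    thresh = np.partition(score.ravel(), -k)[-k]
    S = np.where(score >= thresh, R, 0.0)

    # Binary component: sign matrix with one scale fitted to the remainder.
    R2 = R - S
    alpha = np.mean(np.abs(R2))
    B = alpha * np.sign(R2)
    return S, L, B

# Usage: decompose a random weight and check the reconstruction gap.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
S, L, B = slab_decompose(W, rank=8, sparsity=0.9)
err = np.linalg.norm(W - (S + L + B)) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```

The appeal of such a split is that each component compresses well on its own: S stores few nonzeros, L stores two thin factors, and B stores one bit per entry plus a scale; how SLaB allocates the budget among them is detailed in the paper itself.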