[2505.11076] Addition is almost all you need: Compressing large language models with double binary factorization
Computer Science > Machine Learning
arXiv:2505.11076 (cs)
[Submitted on 16 May 2025 (v1), last revised 28 Feb 2026 (this version, v4)]

Title: Addition is almost all you need: Compressing large language models with double binary factorization
Authors: Vladimír Boža, Vladimír Macko

Abstract: Binary quantization approaches, which replace weight matrices with binary matrices and substitute costly multiplications with cheaper additions, offer a computationally efficient way to address the growing compute and storage requirements of Large Language Models (LLMs). However, the severe quantization constraint ($\pm1$) can lead to significant accuracy degradation. In this paper, we propose Double Binary Factorization (DBF), a novel method that factorizes dense weight matrices into products of two binary (sign) matrices, each accompanied by scaling vectors. DBF preserves the efficiency advantages of binary representations while achieving compression rates that are competitive with or superior to state-of-the-art methods. Specifically, in the 1-bit-per-weight range, DBF outperforms existing binarization approaches. In the 2-bit-per-weight range, DBF is competitive with the best quantization methods such as QuIP# and QTIP. Unlike most existing compression techniques, which offer ...
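To make the factorized form concrete, here is a minimal NumPy sketch of reconstructing a weight matrix from two sign matrices and their scaling vectors, as described in the abstract. The exact placement of the scaling vectors, the fitting procedure, and all variable names here are assumptions for illustration (the paper's algorithm is not reproduced); the row-wise scaling is one plausible reading of "each accompanied by scaling vectors".

```python
import numpy as np

def dbf_reconstruct(B1, s1, B2, s2):
    """Reconstruct an approximate dense weight matrix from a
    double binary factorization:

        W_hat = (s1[:, None] * B1) @ (s2[:, None] * B2)

    B1: (m, k) sign matrix with entries in {-1, +1}
    s1: (m,)   per-row scaling vector for B1
    B2: (k, n) sign matrix with entries in {-1, +1}
    s2: (k,)   per-row scaling vector for B2
    """
    return (s1[:, None] * B1) @ (s2[:, None] * B2)

# Toy usage: random factors standing in for fitted ones.
rng = np.random.default_rng(0)
m, k, n = 8, 16, 8
B1 = rng.choice([-1.0, 1.0], size=(m, k))
B2 = rng.choice([-1.0, 1.0], size=(k, n))
s1 = rng.random(m)
s2 = rng.random(k)
W_hat = dbf_reconstruct(B1, s1, B2, s2)
print(W_hat.shape)  # (8, 8)

# Storage cost: 1 bit per entry of each sign matrix, so roughly
# (m*k + k*n) / (m*n) bits per original weight, plus the small
# scaling vectors. The intermediate dimension k trades accuracy
# against compression rate.
print((m * k + k * n) / (m * n), "bits/weight (sign matrices only)")
```

Because both factors contain only $\pm1$ entries, a matrix-vector product against each factor reduces to additions and subtractions of the input entries (scaled once per row by the accompanying vector), which is the source of the efficiency claim in the title.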