[2603.01376] 3BASiL: An Algorithmic Framework for Sparse plus Low-Rank Compression of LLMs


arXiv - Machine Learning


Computer Science > Machine Learning

arXiv:2603.01376 (cs) [Submitted on 2 Mar 2026]

Title: 3BASiL: An Algorithmic Framework for Sparse plus Low-Rank Compression of LLMs

Authors: Mehdi Makni, Xiang Meng, Rahul Mazumder

Abstract: Sparse plus Low-Rank $(\mathbf{S} + \mathbf{LR})$ decomposition of Large Language Models (LLMs) has emerged as a promising direction in model compression, aiming to decompose pre-trained model weights into a sum of sparse and low-rank matrices $(\mathbf{W} \approx \mathbf{S} + \mathbf{LR})$. Despite recent progress, existing methods often suffer from substantial performance degradation compared to dense models. In this work, we introduce 3BASiL-TM, an efficient one-shot post-training method for $(\mathbf{S} + \mathbf{LR})$ decomposition of LLMs that addresses this gap. Our approach first introduces a novel 3-Block Alternating Direction Method of Multipliers (ADMM) method, termed 3BASiL, to minimize the layer-wise reconstruction error with convergence guarantees. We then design an efficient transformer-matching (TM) refinement step that jointly optimizes the sparse and low-rank components across transformer layers. This step minimizes a novel memory-efficient loss that aligns outputs at the transformer level. Notably, the TM procedure is universal as it can enhance any $(\mathbf{S} ...
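To make the $(\mathbf{W} \approx \mathbf{S} + \mathbf{LR})$ objective concrete, here is a minimal sketch of a sparse-plus-low-rank decomposition via naive alternating projections: magnitude pruning for the sparse part and a truncated SVD for the low-rank part. This is an illustrative baseline only, not the paper's 3BASiL ADMM method; the function name, hyperparameters, and update order are assumptions for the example.

```python
import numpy as np

def sparse_plus_low_rank(W, sparsity=0.85, rank=4, iters=20):
    """Decompose W ~= S + LR by alternating projections.

    Illustrative sketch only (NOT the paper's 3-block ADMM):
      - sparse step: hard-threshold the residual W - LR, zeroing the
        `sparsity` fraction of smallest-magnitude entries;
      - low-rank step: rank-`rank` truncated SVD of W - S.
    Each step minimizes the Frobenius reconstruction error given the
    other block, so the objective is non-increasing across iterations.
    """
    LR = np.zeros_like(W)
    k = int(sparsity * W.size)  # number of entries to zero out
    for _ in range(iters):
        # Sparse step: keep only the largest-magnitude residual entries.
        resid = W - LR
        thresh = np.partition(np.abs(resid).ravel(), k - 1)[k - 1]
        S = np.where(np.abs(resid) > thresh, resid, 0.0)
        # Low-rank step: best rank-r approximation of the new residual.
        U, s, Vt = np.linalg.svd(W - S, full_matrices=False)
        LR = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    rel_err = np.linalg.norm(W - S - LR) / np.linalg.norm(W)
    return S, LR, rel_err
```

On a synthetic weight matrix that is exactly a sparse matrix plus a rank-4 matrix, this loop recovers a decomposition with low relative reconstruction error; the paper's contribution is doing this at LLM scale with convergence guarantees and a cross-layer refinement step.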

Originally published on March 03, 2026. Curated by AI News.

