[2509.22075] CoSpaDi: Compressing LLMs via Calibration-Guided Sparse Dictionary Learning

arXiv - AI

Summary

The paper introduces CoSpaDi, a novel framework for compressing large language models (LLMs) using calibration-guided sparse dictionary learning, improving accuracy and efficiency over traditional methods.

Why It Matters

As LLMs grow in size, efficient compression techniques are crucial for deployment in resource-constrained environments. CoSpaDi offers a promising alternative to existing methods, potentially enhancing model performance while reducing computational costs.

Key Takeaways

  • CoSpaDi replaces low-rank factorization with structured sparse decomposition for better model expressiveness.
  • The framework minimizes functional reconstruction error using a calibration set, enhancing accuracy.
  • CoSpaDi achieves 20–40% compression ratios while maintaining performance, outperforming SVD-based low-rank methods.

Computer Science > Computation and Language

arXiv:2509.22075 (cs) [Submitted on 26 Sep 2025 (v1), last revised 19 Feb 2026 (this version, v4)]

Title: CoSpaDi: Compressing LLMs via Calibration-Guided Sparse Dictionary Learning

Authors: Denis Makhov, Dmitriy Shopkhoev, Magauiya Zhussip, Ammar Ali, Stamatios Lefkimmiatis

Abstract: Post-training compression of large language models (LLMs) often relies on low-rank weight approximations that represent each column of the weight matrix in a shared low-dimensional subspace. This strategy is computationally efficient, but the underlying constraint can be overly rigid for heterogeneous projection weights and may incur avoidable accuracy loss. We propose CoSpaDi (Compression via Sparse Dictionary Learning), a training-free framework that replaces low-rank factorization with a structured sparse decomposition in which each weight matrix is represented as a dense dictionary multiplied by a column-sparse coefficient matrix. This yields a union-of-subspaces model: the columns of the weight matrix are represented as linear combinations of different subsets of dictionary atoms, improving expressiveness at a fixed parameter budget. CoSpaDi is calibration-guided: using a small calibration set, we optimize the factorization to minimize functional reconstruction error of layer ou...
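The decomposition the abstract describes can be illustrated with a toy sketch: approximate a weight matrix W as D @ C, where D is a dense dictionary and every column of C keeps at most k nonzero coefficients, so different columns of W draw on different subsets of atoms (the union-of-subspaces idea). This is not the authors' algorithm: CoSpaDi minimizes functional reconstruction error on calibration data, whereas this sketch minimizes plain weight reconstruction error with greedy orthogonal matching pursuit and a least-squares dictionary update. The names `sparse_factorize` and `omp_column` are hypothetical.

```python
import numpy as np

def omp_column(D, y, k):
    """Greedy sparse coding of one column y over dictionary D,
    selecting at most k atoms (orthogonal matching pursuit)."""
    residual = y.copy()
    support, coef = [], np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx in support:          # residual already explained; stop early
            break
        support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    c = np.zeros(D.shape[1])
    c[support] = coef
    return c

def sparse_factorize(W, num_atoms, k, iters=5, seed=0):
    """Approximate W (d x n) as D @ C with D dense (d x num_atoms) and
    at most k nonzeros per column of C, alternating sparse coding
    with a closed-form least-squares dictionary update."""
    d, n = W.shape
    rng = np.random.default_rng(seed)
    # initialize atoms with normalized random columns of W
    D = W[:, rng.choice(n, size=num_atoms, replace=False)].astype(float).copy()
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    for _ in range(iters):
        C = np.column_stack([omp_column(D, W[:, j], k) for j in range(n)])
        D = W @ np.linalg.pinv(C)   # argmin_D ||W - D C||_F
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    C = np.column_stack([omp_column(D, W[:, j], k) for j in range(n)])
    return D, C

# Toy usage: a matrix with shared low-dimensional structure compresses well.
rng = np.random.default_rng(1)
W = rng.standard_normal((32, 8)) @ rng.standard_normal((8, 64))
D, C = sparse_factorize(W, num_atoms=24, k=8)
rel_err = np.linalg.norm(W - D @ C) / np.linalg.norm(W)
print(f"relative reconstruction error: {rel_err:.3e}")
```

Note the storage trade-off this structure buys: W costs d·n dense values, while the factorization costs d·num_atoms for D plus roughly k values (and k indices) per column of C, and unlike a rank-k factorization, each column may use a different subset of atoms.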

