[2603.23985] Diet Your LLM: Dimension-wise Global Pruning of LLMs via Merging Task-specific Importance Score
Computer Science > Machine Learning

arXiv:2603.23985 (cs)

[Submitted on 25 Mar 2026]

Title: Diet Your LLM: Dimension-wise Global Pruning of LLMs via Merging Task-specific Importance Score

Authors: Jimyung Hong, Jaehyung Kim

Abstract: Large language models (LLMs) have demonstrated remarkable capabilities, but their massive scale poses significant challenges for practical deployment. Structured pruning offers a promising solution by removing entire dimensions or layers, yet existing methods face a critical trade-off: task-agnostic approaches cannot adapt to task-specific requirements, while task-aware methods require costly training to learn task adaptability. We propose DIET (Dimension-wise global pruning of LLMs via merging Task-wise importance scores), a training-free structured pruning method that combines dimension-level granularity with task-aware selection. DIET profiles activation magnitudes across tasks using only 100 samples per task, then applies majority voting to construct a single global mask. DIET incurs no large pre-computation or training cost. Experiments on seven zero-shot benchmarks using Gemma-2 2B and 9B models demonstrate the effectiveness of DIET; for example, at 20% sparsity on Gemma-2 2B, DIET achieves nearly 10% average accuracy improvement, compared to p...
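The profiling-and-voting pipeline described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the per-dimension importance (mean absolute activation over a task's calibration samples), the tie-breaking rule, and the function name `diet_global_mask` are all assumptions made for clarity.

```python
import numpy as np

def diet_global_mask(task_activations, sparsity=0.2):
    """Merge task-specific importance scores into one global keep-mask.

    Hypothetical sketch of the DIET idea:
      1. For each task, score every hidden dimension by its mean
         absolute activation over ~100 calibration samples.
      2. Each task "votes" for its top-(1 - sparsity) dimensions.
      3. Keep the dimensions with the most votes (normalized scores
         break ties), yielding a single training-free global mask.

    task_activations: list of arrays, one per task, shape (n_samples, d)
    Returns: boolean mask of length d with round(d * (1 - sparsity)) Trues.
    """
    d = task_activations[0].shape[1]
    keep = int(round(d * (1 - sparsity)))

    votes = np.zeros(d)
    scores = np.zeros(d)
    for acts in task_activations:
        imp = np.abs(acts).mean(axis=0)   # per-dimension importance
        top = np.argsort(imp)[-keep:]     # this task's keep set
        votes[top] += 1                   # majority voting across tasks
        scores += imp / imp.sum()         # normalized score as tie-breaker

    # Sort by (votes, scores) ascending; keep the last `keep` dimensions.
    order = np.lexsort((scores, votes))
    mask = np.zeros(d, dtype=bool)
    mask[order[-keep:]] = True
    return mask

# Toy usage: 3 tasks, 100 samples each, hidden size 16, 25% sparsity.
rng = np.random.default_rng(0)
tasks = [rng.normal(size=(100, 16)) for _ in range(3)]
mask = diet_global_mask(tasks, sparsity=0.25)
print(mask.sum())  # 12 dimensions kept out of 16
```

The key property this illustrates is that the merged mask is global and fixed: every task sees the same pruned width, with no per-task fine-tuning, matching the training-free claim in the abstract.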