[2507.04517] DOTResize: Reducing LLM Width via Discrete Optimal Transport-based Neuron Merging
Summary
The paper presents DOTResize, a method for reducing the width of Large Language Models (LLMs) by merging neurons via Discrete Optimal Transport. Rather than discarding neurons outright, it re-projects them, improving model efficiency while retaining useful signal.
Why It Matters
As LLMs grow in size, optimizing their architecture for efficiency becomes crucial. DOTResize offers a fresh perspective by focusing on neuron merging rather than traditional pruning, potentially leading to significant reductions in computational costs while preserving model performance.
Key Takeaways
- DOTResize utilizes Discrete Optimal Transport to merge neurons, enhancing model efficiency.
- The method allows for the retention and redistribution of useful signals in LLMs.
- Empirical results indicate that DOTResize can complement existing pruning techniques.
- The approach may lead to measurable reductions in computational costs.
- The method incorporates entropic regularization and matrix factorization to make the transport maps applicable within the Transformer architecture.
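The entropic-regularized transport step in the takeaways above can be sketched with a minimal Sinkhorn iteration. This is a simplified illustration, not the paper's code: the choice of anchor neurons, the uniform masses, and the function names (`sinkhorn`, `merge_neurons`) are all assumptions for the example.

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.05, n_iters=500):
    """Entropic-regularized discrete OT via Sinkhorn-Knopp iterations.
    cost: (n, m) cost matrix; a: (n,) and b: (m,) marginal masses."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan; rows sum to a

def merge_neurons(W, n_keep, reg=0.05):
    """Merge the n rows (neurons) of weight matrix W (n, d) down to
    n_keep rows via an entropic OT plan. The anchor choice here (the
    first n_keep neurons) is a hypothetical stand-in for a proper
    target-selection step."""
    n, _ = W.shape
    anchors = W[:n_keep]
    # Cost: squared Euclidean distance between neuron weight vectors.
    cost = ((W[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    cost = cost / cost.max()              # normalise to avoid underflow in exp
    a = np.full(n, 1.0 / n)
    b = np.full(n_keep, 1.0 / n_keep)
    T = sinkhorn(cost, a, b, reg=reg)
    # Column-normalise the plan so each merged neuron is a weighted
    # average of *all* original neurons, not a hard selection.
    P = T / T.sum(axis=0, keepdims=True)  # (n, n_keep)
    return P.T @ W                        # (n_keep, d)
```

Because every original neuron contributes mass to the merged layer, the reduction redistributes signal instead of dropping it, which is the contrast with importance-based pruning.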
Computer Science > Machine Learning — arXiv:2507.04517 (cs)
[Submitted on 6 Jul 2025 (v1), last revised 24 Feb 2026 (this version, v2)]
Title: DOTResize: Reducing LLM Width via Discrete Optimal Transport-based Neuron Merging
Authors: Neha Verma, Kenton Murray, Kevin Duh
Abstract: Structured pruning methods designed for Large Language Models (LLMs) generally focus on identifying and removing the least important components to optimize model size. In this work, however, we question this prevalent approach by instead exploring how to recombine information from structures designated for pruning back into the reduced model. We focus specifically on neuron width reduction, frame this problem as a Discrete Optimal Transport problem, and propose DOTResize, a novel Transformer compression method that uses optimal transport theory to transform and compress model width. To ensure applicability within the Transformer architecture, we motivate and incorporate the necessary entropic regularization and matrix factorization techniques into the transportation maps produced by our method. Unlike pruning-based approaches, which discard neurons based on importance measures, DOTResize re-projects the entire neuron width, allowing useful signal to be retained and redistributed across the reduced layer. Empirical results show...
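The re-projection idea from the abstract can be illustrated on a toy MLP: fold a column-stochastic merge matrix `P` into the two adjacent weight matrices so the hidden width shrinks. Pushing `P` through the nonlinearity is exact only when merged neurons are identical (the OT cost encourages merging similar neurons, so it is an approximation in general); the function `reduce_mlp` and its pseudoinverse compensation are illustrative assumptions, not the paper's factorization.

```python
import numpy as np

def reduce_mlp(W_in, W_out, P):
    """Fold a merge matrix P (n_hidden, n_keep), with columns summing
    to one, into an MLP's weights, shrinking the hidden width.
    W_in: (d, n_hidden) input projection; W_out: (n_hidden, d) output
    projection. Hypothetical sketch for illustration only."""
    W_in_small = W_in @ P                    # merge input projections
    W_out_small = np.linalg.pinv(P) @ W_out  # least-squares compensation
    return W_in_small, W_out_small
```

In the limiting case where merged neurons are exact duplicates, `relu(x @ W_in_small) @ W_out_small` reproduces the original MLP's output exactly; for merely similar neurons the reconstruction is approximate.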