[2602.14896] Algorithmic Simplification of Neural Networks with Mosaic-of-Motifs
Summary
This paper explores the algorithmic simplification of neural networks through a structured parameterization called Mosaic-of-Motifs, demonstrating how structure in trained parameters enables effective model compression.
Why It Matters
Understanding the algorithmic complexity of neural networks is crucial for improving model efficiency. This research provides insights into how structured parameterization can enhance compression techniques, which is vital for deploying deep learning models in resource-constrained environments.
Key Takeaways
- Mosaic-of-Motifs (MoMos) simplifies neural network parameters, enhancing compression.
- Trained models exhibit lower algorithmic complexity compared to their initial random states.
- Empirical evidence shows that structured parameterization preserves performance while reducing algorithmic complexity.
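The claim that trained weights are algorithmically simpler than random ones can be illustrated with a crude proxy (a minimal sketch, not the paper's method): the length of a gzip-compressed byte string is a computable upper bound on Kolmogorov complexity, so a weight vector with repeated structure should compress far better than an i.i.d. random one.

```python
import gzip
import numpy as np

rng = np.random.default_rng(0)

# Unstructured weights: i.i.d. Gaussian, essentially incompressible.
random_w = rng.standard_normal(4096).astype(np.float32)

# "Structured" weights: a small motif tiled across the vector, standing in
# for the repeatability the paper attributes to trained models.
motif = rng.standard_normal(64).astype(np.float32)
structured_w = np.tile(motif, 64)

def compressed_size(w: np.ndarray) -> int:
    # gzip length in bytes: a practical upper bound on algorithmic complexity.
    return len(gzip.compress(w.tobytes()))

print(compressed_size(random_w), compressed_size(structured_w))
```

The structured vector compresses to a fraction of the random one's size even though both contain 4096 float32 parameters, which is the intuition behind using algorithmic complexity to explain compressibility.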
Computer Science > Machine Learning
arXiv:2602.14896 (cs) [Submitted on 16 Feb 2026]
Authors: Pedram Bakhtiarifard, Tong Chen, Jonathan Wenshøj, Erik B Dam, Raghavendra Selvan
Abstract
Large-scale deep learning models are well-suited for compression. Methods like pruning, quantization, and knowledge distillation have been used to achieve massive reductions in the number of model parameters, with marginal performance drops across a variety of architectures and tasks. This raises the central question: \emph{Why are deep neural networks suited for compression?} In this work, we take up the perspective of algorithmic complexity to explain this behavior. We hypothesize that the parameters of trained models have more structure and, hence, exhibit lower algorithmic complexity than the weights at (random) initialization, and furthermore that model compression methods harness this reduced algorithmic complexity to compress models. Although an unconstrained parameterization of model weights, $\mathbf{w} \in \mathbb{R}^n$, can represent arbitrary weight assignments, the solutions found during training exhibit repeatability and structure, making them algorithmically simpler than a generic program. To this end, we formalize the Kolmogorov c...
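The name Mosaic-of-Motifs suggests weight matrices assembled from a small dictionary of repeated blocks. The sketch below is purely illustrative of such a motif-tiled parameterization; the block size, dictionary size, and tiling scheme are assumptions for this example, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small dictionary of learnable motif blocks (assumed sizes).
n_motifs, block = 4, 8
motifs = rng.standard_normal((n_motifs, block, block))

# Tile a 32x32 weight matrix from 4x4 motif blocks chosen by index.
rows, cols = 4, 4
index = rng.integers(0, n_motifs, size=(rows, cols))
W = np.block([[motifs[index[i, j]] for j in range(cols)]
              for i in range(rows)])

# Storage cost: 4 motifs * 64 params + 16 indices,
# versus 1024 free parameters for an unconstrained matrix.
print(W.shape)
```

The point of the illustration is the asymmetry in description length: the full matrix is determined by far fewer numbers than it contains, which is exactly the kind of reduced algorithmic complexity the abstract argues trained networks exhibit.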