[2602.13532] Fast Swap-Based Element Selection for Multiplication-Free Dimension Reduction
Summary
This paper presents a fast swap-based algorithm for element selection, a form of dimension reduction that requires no multiplications and is therefore well suited to resource-constrained systems.
Why It Matters
Dimension reduction removes unnecessary model parameters, mitigates overfitting, and accelerates training and inference. Standard methods such as PCA rely on matrix multiplications, whose count can itself become a bottleneck on resource-constrained hardware; a multiplication-free method removes that cost entirely, making this work relevant to machine learning applications where computational resources are limited.
Key Takeaways
- Proposes a fast, multiplication-free algorithm for element selection.
- Addresses the challenge of determining which elements to retain for effective dimension reduction.
- Utilizes a swap-based local search method to optimize element selection efficiently.
- Demonstrates effectiveness through experiments on the MNIST dataset.
- Offers a practical solution for resource-constrained systems in machine learning.
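The swap-based local search mentioned above can be sketched as follows. This is a naive illustration, not the paper's algorithm: the paper's contribution is a *fast* variant, whose efficient update rules are not reproduced here, and the helper `swap_local_search` and its synthetic data are assumptions for demonstration only.

```python
import numpy as np

def swap_local_search(X, T, k, rng=None):
    """Naive swap-based local search for element selection (illustrative sketch).

    Repeatedly swaps a selected element for an unselected one whenever the
    swap lowers the regression MSE, until no single swap improves it.
    """
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    selected = list(rng.choice(d, size=k, replace=False))  # random initial subset

    def mse(idx):
        # Criterion: minimum MSE of least-squares regression predicting T
        # from the selected columns of X.
        Xs = X[:, idx]
        W, *_ = np.linalg.lstsq(Xs, T, rcond=None)
        return float(np.mean((Xs @ W - T) ** 2))

    best = mse(selected)
    improved = True
    while improved:
        improved = False
        for i in range(k):                      # position to swap out
            for j in range(d):                  # candidate to swap in
                if j in selected:
                    continue
                cand = selected.copy()
                cand[i] = j
                e = mse(cand)
                if e < best - 1e-12:
                    selected, best = cand, e
                    improved = True
                    break                       # restart scan after an accepted swap
            if improved:
                break
    return selected, best

# Toy usage on synthetic data (hypothetical): the target depends only on
# columns 1, 4, and 7, so a good search should drive the MSE toward zero.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
T = X[:, [1, 4, 7]] @ rng.standard_normal((3, 2))
sel, err = swap_local_search(X, T, k=3)
```

Each full scan costs O(k(d-k)) least-squares solves, which is why the naive version is slow and a fast algorithm for the swap evaluation matters.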
Computer Science > Machine Learning
arXiv:2602.13532 (cs) [Submitted on 14 Feb 2026]
Title: Fast Swap-Based Element Selection for Multiplication-Free Dimension Reduction
Authors: Nobutaka Ono
Abstract: In this paper, we propose a fast algorithm for element selection, a multiplication-free form of dimension reduction that produces a dimension-reduced vector by simply selecting a subset of elements from the input. Dimension reduction is a fundamental technique for reducing unnecessary model parameters, mitigating overfitting, and accelerating training and inference. A standard approach is principal component analysis (PCA), but PCA relies on matrix multiplications; on resource-constrained systems, the multiplication count itself can become a bottleneck. Element selection eliminates this cost because the reduction consists only of selecting elements, and thus the key challenge is to determine which elements should be retained. We evaluate a candidate subset through the minimum mean-squared error of linear regression that predicts a target vector from the selected elements, where the target may be, for example, a one-hot label vector in classification. When an explicit target is unavailable, the input itself can be used as the target, yielding a reconstruction-based criterion. The resulting optimization is combinatorial, and ex...
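The selection criterion described in the abstract can be sketched as a small function: score a candidate subset by the minimum MSE of a least-squares regression from the selected elements to the target. The function name `subset_mse` and the synthetic data are assumptions for illustration; passing `T = X` gives the reconstruction-based criterion the abstract mentions.

```python
import numpy as np

def subset_mse(X, T, idx):
    """Minimum MSE of least-squares regression predicting target T
    from the selected columns of X (the abstract's subset criterion)."""
    Xs = X[:, idx]  # the multiplication-free "reduction": just select columns
    W, *_ = np.linalg.lstsq(Xs, T, rcond=None)
    return float(np.mean((Xs @ W - T) ** 2))

# Synthetic check (hypothetical data): the target is a linear function of
# columns 1, 4, and 7, so that subset should score (near) zero MSE.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
T = X[:, [1, 4, 7]] @ rng.standard_normal((3, 2))
print(subset_mse(X, T, [1, 4, 7]) < subset_mse(X, T, [0, 2, 3]))  # True
```

Because the reduction step is pure element selection, only the offline search over subsets involves arithmetic; deployment-time reduction costs no multiplications at all.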