[2509.20323] A Recovery Guarantee for Sparse Neural Networks
Computer Science > Machine Learning
arXiv:2509.20323 (cs)
[Submitted on 24 Sep 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: A Recovery Guarantee for Sparse Neural Networks
Authors: Sara Fridovich-Keil, Mert Pilanci

Abstract: We prove the first guarantees of sparse recovery for ReLU neural networks, where the sparse network weights constitute the signal to be recovered. Specifically, we study structural properties of the sparse network weights for two-layer, scalar-output networks under which a simple iterative hard thresholding algorithm recovers these weights exactly, using memory that grows linearly in the number of nonzero weights. We validate this theoretical result with simple experiments on recovery of sparse planted MLPs, MNIST classification, and implicit neural representations. Experimentally, we find performance that is competitive with, and often exceeds, a high-performing but memory-inefficient baseline based on iterative magnitude pruning. Code is available at this https URL.

Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)
Cite as: arXiv:2509.20323 [cs.LG] (or arXiv:2509.20323v2 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2509.20323
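The paper applies iterative hard thresholding to two-layer ReLU network weights. As a minimal illustration of the generic iterative hard thresholding template the abstract refers to, here is a sketch for the classical linear sparse-recovery setting y = Ax (not the paper's network setting); the function names, problem sizes, and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hard_threshold(x, k):
    # Projection onto k-sparse vectors: keep the k largest-magnitude
    # entries of x and zero out the rest.
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def iht(A, y, k, step=1.0, iters=300):
    # Iterative hard thresholding: a gradient step on ||y - A x||^2
    # followed by hard thresholding to the k-sparse set. Memory use is
    # dominated by the current iterate, linear in the signal dimension.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x

# Recover a planted 3-sparse signal from random Gaussian measurements.
rng = np.random.default_rng(0)
m, n, k = 100, 200, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)  # rows scaled so A'A ~ I on sparse supports
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true
x_hat = iht(A, y, k)
```

In the noiseless regime with enough measurements relative to the sparsity level, the iterate converges to the planted signal; the paper's contribution is an analogous exact-recovery guarantee when the sparse signal is the weight vector of a two-layer ReLU network rather than a linear model.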