[2602.21961] Robustness in sparse artificial neural networks trained with adaptive topology
Summary
This paper examines the robustness of sparse artificial neural networks whose topology adapts during training, showing that they remain competitive on image classification benchmarks (MNIST and Fashion MNIST) even at 99% sparsity.
Why It Matters
As deep learning models grow in complexity, ensuring their robustness and efficiency is crucial. This research highlights how adaptive topology in sparse networks can maintain high accuracy while reducing computational load, making it relevant for both academic and practical applications in AI.
Key Takeaways
- Adaptive topology in sparse networks enhances robustness.
- The proposed architecture (three sparse layers at 99% sparsity followed by a dense layer) achieves competitive accuracy.
- Robustness is tested against various perturbations, including random link removal, adversarial attacks, and link weight shuffling.
- The findings support the potential of sparse networks for efficient deep learning.
- This research contributes to the ongoing discourse on AI model efficiency and reliability.
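The summary does not spell out how the sparse layers' topology is updated between epochs. A common prune-and-regrow scheme for adaptive sparse training (as in Sparse Evolutionary Training) removes the smallest-magnitude active links after each epoch and regrows the same number at random inactive positions; the sketch below illustrates that idea with NumPy, and the function name, pruning fraction, and re-initialization scale are assumptions, not the paper's exact rule:

```python
import numpy as np

def update_topology(weights, mask, prune_frac=0.3, rng=None):
    """One SET-style prune-and-regrow step (illustrative sketch).

    weights: dense array of layer weights (zeros at inactive links)
    mask:    boolean array of the same shape marking active links
    Returns the updated (weights, mask) with the same number of
    active links as before.
    """
    rng = rng or np.random.default_rng(0)
    flat_w = weights.copy().ravel()
    flat_m = mask.copy().ravel()

    # Prune the active links with the smallest absolute weight.
    active = np.flatnonzero(flat_m)
    n_prune = int(prune_frac * active.size)
    order = np.argsort(np.abs(flat_w[active]))
    pruned = active[order[:n_prune]]
    flat_m[pruned] = False
    flat_w[pruned] = 0.0

    # Regrow the same number of links at random inactive positions,
    # with a small fresh initialization (scale is an assumption).
    inactive = np.flatnonzero(~flat_m)
    regrown = rng.choice(inactive, size=n_prune, replace=False)
    flat_m[regrown] = True
    flat_w[regrown] = rng.normal(0.0, 0.01, size=n_prune)

    return flat_w.reshape(weights.shape), flat_m.reshape(mask.shape)
```

Because the number of pruned and regrown links is equal, the overall sparsity level (here, 99%) is preserved across epochs while the connectivity pattern gradually migrates toward useful links.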
Computer Science > Machine Learning
arXiv:2602.21961 (cs) [Submitted on 25 Feb 2026]
Title: Robustness in sparse artificial neural networks trained with adaptive topology
Authors: Bendegúz Sulyok, Gergely Palla, Filippo Radicchi, Santo Fortunato
Abstract: We investigate the robustness of sparse artificial neural networks trained with adaptive topology. We focus on a simple yet effective architecture consisting of three sparse layers with 99% sparsity followed by a dense layer, applied to image classification tasks such as MNIST and Fashion MNIST. By updating the topology of the sparse layers between each epoch, we achieve competitive accuracy despite the significantly reduced number of weights. Our primary contribution is a detailed analysis of the robustness of these networks, exploring their performance under various perturbations including random link removal, adversarial attack, and link weight shuffling. Through extensive experiments, we demonstrate that adaptive topology not only enhances efficiency but also maintains robustness. This work highlights the potential of adaptive sparse networks as a promising direction for developing efficient and reliable deep learning models.
Subjects: Machine Learning (cs.LG); Physics and Society (physics.soc-ph)
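Two of the perturbations listed in the abstract, random link removal and link weight shuffling, operate directly on a layer's sparse weights. A minimal NumPy sketch of such perturbations is given below (function names and the removal fraction are illustrative; the paper's exact protocols may differ):

```python
import numpy as np

def remove_random_links(weights, frac, rng):
    """Zero out a random fraction of the nonzero (active) links."""
    w = weights.copy().ravel()
    active = np.flatnonzero(w)
    drop = rng.choice(active, size=int(frac * active.size), replace=False)
    w[drop] = 0.0
    return w.reshape(weights.shape)

def shuffle_link_weights(weights, rng):
    """Randomly permute the nonzero weight values while keeping the
    sparse topology (the set of active links) fixed."""
    w = weights.copy().ravel()
    active = np.flatnonzero(w)
    w[active] = rng.permutation(w[active])
    return w.reshape(weights.shape)
```

Measuring test accuracy as a function of the removal fraction, or before and after shuffling, then quantifies how gracefully the sparse network degrades under each perturbation.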