[2602.21961] Robustness in sparse artificial neural networks trained with adaptive topology

arXiv - Machine Learning 3 min read Article

Summary

This paper examines the robustness of sparse artificial neural networks whose topology adapts during training, showing that they achieve competitive accuracy on image classification benchmarks (MNIST and Fashion MNIST) despite 99% sparsity.

Why It Matters

As deep learning models grow in complexity, ensuring their robustness and efficiency is crucial. This research highlights how adaptive topology in sparse networks can maintain high accuracy while reducing computational load, making it relevant for both academic and practical applications in AI.

Key Takeaways

  • Adaptive topology in sparse networks enhances robustness.
  • The proposed architecture achieves competitive accuracy with 99% sparsity.
  • Robustness is tested against random link removal, link weight shuffling, and adversarial attacks.
  • The findings support the potential of sparse networks for efficient deep learning.
  • This research contributes to the ongoing discourse on AI model efficiency and reliability.
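The paper's exact perturbation protocols are not reproduced here, but two of the probes named in the takeaways above (random link removal and link weight shuffling) are simple to state. A minimal NumPy sketch with hypothetical helper names, assuming weights are stored as a dense matrix whose zeros mark missing links:

```python
import numpy as np

def remove_links(weights, frac, rng=None):
    """Randomly zero out a fraction of the active (nonzero) connections."""
    rng = rng or np.random.default_rng(1)
    w = weights.copy()
    active = np.flatnonzero(w)
    drop = rng.choice(active, size=int(frac * active.size), replace=False)
    w.flat[drop] = 0.0
    return w

def shuffle_link_weights(weights, rng=None):
    """Permute weight values among the existing links, keeping topology fixed."""
    rng = rng or np.random.default_rng(1)
    w = weights.copy()
    active = np.flatnonzero(w)
    w.flat[active] = rng.permutation(w.flat[active])
    return w
```

A robustness curve would then be traced by applying `remove_links` at increasing `frac` and re-measuring test accuracy; `shuffle_link_weights` instead isolates how much of the performance is carried by the learned topology rather than the specific weight values.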

Computer Science > Machine Learning
arXiv:2602.21961 (cs) [Submitted on 25 Feb 2026]

Title: Robustness in sparse artificial neural networks trained with adaptive topology
Authors: Bendegúz Sulyok, Gergely Palla, Filippo Radicchi, Santo Fortunato

Abstract: We investigate the robustness of sparse artificial neural networks trained with adaptive topology. We focus on a simple yet effective architecture consisting of three sparse layers with 99% sparsity followed by a dense layer, applied to image classification tasks such as MNIST and Fashion MNIST. By updating the topology of the sparse layers between each epoch, we achieve competitive accuracy despite the significantly reduced number of weights. Our primary contribution is a detailed analysis of the robustness of these networks, exploring their performance under various perturbations, including random link removal, adversarial attacks, and link weight shuffling. Through extensive experiments, we demonstrate that adaptive topology not only enhances efficiency but also maintains robustness. This work highlights the potential of adaptive sparse networks as a promising direction for developing efficient and reliable deep learning models.

Subjects: Machine Learning (cs.LG); Physics and Society (physics.soc-ph)
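The abstract says the sparse layers' topology is updated between epochs but does not spell out the update rule. A common scheme for this kind of adaptive sparsity is magnitude-based prune-and-regrow, in the spirit of sparse evolutionary training; the sketch below is an assumption-laden illustration of that idea (function names and hyperparameters are hypothetical, not from the paper):

```python
import numpy as np

def init_sparse_layer(n_in, n_out, sparsity=0.99, rng=None):
    """Create a weight matrix where only ~(1 - sparsity) of entries are active links."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random((n_in, n_out)) > sparsity       # ~1% active connections
    weights = rng.standard_normal((n_in, n_out)) * mask
    return weights, mask

def update_topology(weights, mask, drop_frac=0.3, rng=None):
    """One between-epoch step: prune the weakest links, regrow the same number at random."""
    rng = rng or np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_drop = int(drop_frac * active.size)
    # prune: remove the smallest-magnitude active connections
    weakest = active[np.argsort(np.abs(weights.flat[active]))[:n_drop]]
    mask.flat[weakest] = False
    weights.flat[weakest] = 0.0
    # regrow: activate an equal number of currently inactive links with small init
    inactive = np.flatnonzero(~mask)
    new = rng.choice(inactive, size=n_drop, replace=False)
    mask.flat[new] = True
    weights.flat[new] = rng.standard_normal(n_drop) * 0.01
    return weights, mask
```

The key invariant is that the number of active links, and hence the 99% sparsity budget, is preserved across updates; only which connections exist changes between epochs.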

