[2603.25517] NERO-Net: A Neuroevolutionary Approach for the Design of Adversarially Robust CNNs
Computer Science > Neural and Evolutionary Computing
arXiv:2603.25517 (cs) [Submitted on 26 Mar 2026]
Title: NERO-Net: A Neuroevolutionary Approach for the Design of Adversarially Robust CNNs
Authors: Inês Valentim, Nuno Antunes, Nuno Lourenço
Abstract: Neuroevolution automates the complex task of neural network design but often ignores the inherent adversarial fragility of evolved models, which is a barrier to adoption in safety-critical scenarios. While robust training methods have received significant attention, the design of architectures exhibiting intrinsic robustness remains largely unexplored. In this paper, we propose NERO-Net, a neuroevolutionary approach to designing convolutional neural networks better equipped to resist adversarial attacks. Our search strategy isolates the architectural influence on robustness by avoiding adversarial training during the evolutionary loop. Accordingly, our fitness function promotes candidates that, even when trained with standard (non-robust) methods, achieve high post-attack accuracy without sacrificing accuracy on clean samples. We assess NERO-Net on CIFAR-10 with a specific focus on $L_\infty$-robustness. In particular, the fittest individual emerged from evolutionary search with 33% accuracy against FGSM, used as an efficient estimator for robustness during th...
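The abstract describes using FGSM post-attack accuracy as a cheap robustness signal inside the fitness evaluation. The paper's actual implementation is not shown here; as a minimal sketch of the idea, the snippet below implements FGSM ($x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x \mathcal{L})$) for a toy logistic-regression "model" in NumPy, where the gradient is available in closed form. The function names (`fgsm_attack`, `post_attack_accuracy`) and the toy model are illustrative assumptions, not the authors' code, which would operate on CNNs with autograd.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM (L-infinity): perturb each input by eps in the
    direction of the sign of the loss gradient w.r.t. the input.
    Closed-form gradient of binary cross-entropy for logistic regression:
    dL/dx = (sigmoid(w.x + b) - y) * w."""
    grad = (sigmoid(x @ w + b) - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad)

def post_attack_accuracy(x, y, w, b, eps):
    """Accuracy on FGSM-perturbed inputs: the robustness estimate a
    fitness function could combine with clean accuracy."""
    x_adv = fgsm_attack(x, y, w, b, eps)
    preds = (sigmoid(x_adv @ w + b) > 0.5).astype(int)
    return float((preds == y).mean())

# Toy data: class decided by the sign of the first feature.
x = np.array([[1.0, 0.0], [2.0, 0.0], [-1.0, 0.0], [-2.0, 0.0]])
y = np.array([1, 1, 0, 0])
w, b = np.array([3.0, 0.0]), 0.0

clean_acc = post_attack_accuracy(x, y, w, b, eps=0.0)    # eps=0 → clean accuracy
robust_acc = post_attack_accuracy(x, y, w, b, eps=1.5)   # samples near the boundary flip
```

With `eps=0` the attack is a no-op and the score equals clean accuracy; raising `eps` lowers it, mirroring the abstract's trade-off between clean and post-attack accuracy that the fitness function must balance.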