[2603.28594] Detection of Adversarial Attacks in Robotic Perception
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.28594 (cs) [Submitted on 30 Mar 2026]

Title: Detection of Adversarial Attacks in Robotic Perception
Authors: Ziad Sharawy, Mohammad Nakshbandi, Sorin Mihai Grigorescu

Abstract: Deep Neural Networks (DNNs) achieve strong performance in semantic segmentation for robotic perception but remain vulnerable to adversarial attacks, threatening safety-critical applications. While adversarial robustness has been studied extensively for image classification, semantic segmentation in robotic contexts requires specialized architectures and detection strategies.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Robotics (cs.RO)
Cite as: arXiv:2603.28594 [cs.CV] (or arXiv:2603.28594v1 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.28594 (arXiv-issued DOI via DataCite, pending registration)
Submission history: [v1] Mon, 30 Mar 2026 15:41:49 UTC (8,029 KB), from Ziad Sharawy
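The abstract does not specify which attacks the paper considers; as background, a minimal sketch of one standard gradient-based attack, the Fast Gradient Sign Method (FGSM), is shown below on a toy logistic "per-pixel" classifier. This is an illustrative assumption, not the paper's method: the model, weights, and epsilon are hypothetical, and a real segmentation attack would perturb the input against a full DNN's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    """Binary cross-entropy loss of a logistic classifier on input x."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: step the input in the direction sign(dL/dx).

    For BCE loss with a logistic model, dL/dx = (sigmoid(w.x + b) - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # exact input gradient for this toy model
    return x + eps * np.sign(grad_x)

# Hypothetical weights and a clean input with true label y = 1
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.8, -0.3, 0.4])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.3)

# The bounded perturbation strictly increases the model's loss
assert bce_loss(x_adv, y, w, b) > bce_loss(x, y, w, b)
assert np.max(np.abs(x_adv - x)) <= 0.3 + 1e-12
```

The perturbation is imperceptibly bounded (here by eps = 0.3 per coordinate) yet degrades the classifier's confidence, which is the failure mode that attack-detection methods for robotic perception aim to flag at inference time.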