[2604.04349] Adversarial Robustness Analysis of Cloud-Assisted Autonomous Driving Systems
Computer Science > Robotics
arXiv:2604.04349 (cs)
[Submitted on 6 Apr 2026]

Title: Adversarial Robustness Analysis of Cloud-Assisted Autonomous Driving Systems
Authors: Maher Al Islam, Amr S. El-Wakeel

Abstract: Autonomous vehicles increasingly rely on deep learning-based perception and control, which impose substantial computational demands. Cloud-assisted architectures offload these functions to remote servers, enabling enhanced perception and coordinated decision-making through the Internet of Vehicles (IoV). However, this paradigm introduces cross-layer vulnerabilities, where adversarial manipulation of perception models and network impairments in the vehicle-cloud link can jointly undermine safety-critical autonomy. This paper presents a hardware-in-the-loop IoV testbed that integrates real-time perception, control, and communication to evaluate such vulnerabilities in cloud-assisted autonomous driving. A YOLOv8-based object detector deployed on the cloud is subjected to white-box adversarial attacks using the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), while network adversaries induce delay and packet loss in the vehicle-cloud loop. Results show that adversarial perturbations significantly degrade perception performance, with PGD reducing detection precision and recall fr...
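For context on the two attacks named in the abstract, here is a minimal NumPy sketch of FGSM and PGD against a toy logistic "detector". This is illustrative only, not the paper's YOLOv8 pipeline: the weights, input, and epsilon values are hypothetical stand-ins. FGSM takes a single signed-gradient step; PGD iterates smaller steps and projects back into an eps-ball around the clean input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, y, eps):
    """One FGSM step: x_adv = clip(x + eps * sign(dL/dx), 0, 1),
    where L is binary cross-entropy of a logistic model p = sigmoid(w.x)."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w          # dL/dx for BCE with a logistic output
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def pgd_perturb(x, w, y, eps, alpha, steps):
    """PGD: iterated FGSM-style steps of size alpha, projected back into
    the L-infinity eps-ball around x and the valid [0, 1] pixel range."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv)
        grad_x = (p - y) * w
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv

# Toy "image" and detector weights (hypothetical, fixed seed for repeatability)
rng = np.random.default_rng(0)
x = rng.random(16)
w = rng.standard_normal(16)
y = 1.0  # true class

x_fgsm = fgsm_perturb(x, w, y, eps=0.1)
x_pgd = pgd_perturb(x, w, y, eps=0.1, alpha=0.03, steps=10)

# Both attacks reduce the model's confidence in the true class
print(sigmoid(w @ x), sigmoid(w @ x_fgsm), sigmoid(w @ x_pgd))
```

Because each coordinate moves against the sign of its weight, the true-class confidence can only decrease under either attack; PGD's multiple projected steps typically drive it lower than the single FGSM step within the same eps budget.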