[2505.18323] Architectural Backdoors for Within-Batch Data Stealing and Model Inference Manipulation
Computer Science > Cryptography and Security
arXiv:2505.18323 (cs)
[Submitted on 23 May 2025 (v1), last revised 20 Mar 2026 (this version, v2)]

Title: Architectural Backdoors for Within-Batch Data Stealing and Model Inference Manipulation
Authors: Nicolas Küchler, Ivan Petrov, Conrad Grobler, Ilia Shumailov

Abstract: For nearly a decade the academic community has investigated backdoors in neural networks, primarily focusing on classification tasks where adversaries manipulate the model prediction. While demonstrably malicious, the immediate real-world impact of such prediction-altering attacks has remained unclear. In this paper we introduce a novel and significantly more potent class of backdoors that builds upon recent advancements in architectural backdoors. We demonstrate how these backdoors can be specifically engineered to exploit batched inference, a common technique for improving hardware utilization, enabling large-scale user data manipulation and theft. By targeting the batching process, these architectural backdoors facilitate information leakage between concurrent user requests and allow attackers to fully control model responses directed at other users within the same batch. In other words, an attacker who can change the model architecture can set and steal model inputs and outputs of o...
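To make the threat model concrete, here is a minimal NumPy sketch of the kind of cross-batch leak the abstract describes. The trigger mechanism, the `TRIGGER` constant, and the victim-selection rule are all illustrative assumptions, not the paper's actual construction: a layer behaves as identity on benign rows, but when one request in the batch carries a trigger, the layer copies a neighbouring user's input into the attacker's slot.

```python
import numpy as np

TRIGGER = -1.0  # hypothetical trigger value placed in the first feature


def backdoored_layer(x: np.ndarray) -> np.ndarray:
    """Toy sketch of a batch-leaking layer (illustrative only).

    Acts as the identity on benign rows; any row whose first feature
    equals TRIGGER has its output replaced by the previous row in the
    batch, leaking a concurrent user's data to the attacker's slot.
    """
    out = x.copy()
    triggered = x[:, 0] == TRIGGER
    for i in np.where(triggered)[0]:
        victim = (i - 1) % x.shape[0]  # a neighbouring request in the batch
        out[i] = x[victim]             # cross-request information leak
    return out


# Batch of two concurrent requests: a victim and an attacker.
victim = np.array([0.3, 0.7, 0.1])
attacker = np.array([TRIGGER, 0.0, 0.0])  # attacker's input carries the trigger
batch = np.stack([victim, attacker])

leaked = backdoored_layer(batch)
assert np.allclose(leaked[0], victim)  # benign row passes through unchanged
assert np.allclose(leaked[1], victim)  # attacker's slot now holds victim's input
```

The point of the sketch is that the leak lives entirely in the architecture: no weights are poisoned, and the layer is benign on any batch that contains no trigger.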