[2602.16309] The Weight of a Bit: EMFI Sensitivity Analysis of Embedded Deep Learning Models
Summary
This article investigates the impact of different number representations on the vulnerability of embedded deep learning models to electromagnetic fault injection (EMFI) attacks, revealing that integer representations offer better resilience than floating-point formats.
Why It Matters
As embedded systems increasingly rely on deep learning, understanding their vulnerability to fault injection attacks is crucial for security and reliability. This study shows how the choice of numerical representation for model parameters can limit the damage a single injected fault causes, which is vital for developers and researchers in AI security.
Key Takeaways
- Floating-point representations are highly susceptible to EMFI attacks, leading to significant accuracy degradation.
- Integer representations, especially 8-bit, show improved resilience against fault injections.
- The study evaluates four popular image classifiers, providing practical insights for embedded AI applications.
- Understanding number representation impacts can guide developers in enhancing model security.
- This research fills a gap in existing literature regarding EMFI sensitivity analysis in deep learning.
Paper Details
Computer Science > Cryptography and Security, arXiv:2602.16309 (cs)
Submitted on 18 Feb 2026
Title: The Weight of a Bit: EMFI Sensitivity Analysis of Embedded Deep Learning Models
Authors: Jakub Breier, Štefan Kučerák, Xiaolu Hou
Abstract: Fault injection attacks on embedded neural network models have been shown to be a potent threat. Numerous works have studied the resilience of models from various points of view. To date, however, no comprehensive study has evaluated the influence of the number representations used for model parameters on electromagnetic fault injection (EMFI) attacks. In this paper, we investigate how four different number representations influence the success of an EMFI attack on embedded neural network models. We chose two common floating-point representations (32-bit and 16-bit) and two integer representations (8-bit and 4-bit). We deployed four common image classifiers, ResNet-18, ResNet-34, ResNet-50, and VGG-11, on an embedded memory chip, and utilized a low-cost EMFI platform to trigger faults. Our results show that while floating-point representations exhibit almost complete degradation in accuracy (Top-1 and Top-5) after a single fault injection, integer representations offer better resistance overall. In particular, when considering the 8-bit representation on a relati...
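The asymmetry the paper reports has a simple arithmetic intuition: in an IEEE-754 float, a single flipped exponent bit can change a weight's magnitude by many orders of magnitude, whereas in an 8-bit two's-complement integer the worst single-bit flip is bounded. The sketch below illustrates this idea in Python; it is an illustration of the underlying number formats, not code or data from the paper, and the helper names are invented here.

```python
import struct

def flip_bit_f32(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB) in the IEEE-754 binary32 encoding of `value`."""
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return out

def flip_bit_i8(value: int, bit: int) -> int:
    """Flip one bit (0 = LSB) in an 8-bit two's-complement integer."""
    raw = (value & 0xFF) ^ (1 << bit)
    return raw - 256 if raw >= 128 else raw

# A typical small weight: flipping the exponent MSB (bit 30) of float32
# 0.5 yields 2**127, roughly 1.7e38 -- a catastrophic magnitude change.
print(flip_bit_f32(0.5, 30))

# An int8 weight under the worst single-bit flip (sign bit): 64 becomes
# -64, a change bounded by the 8-bit range, never an astronomical value.
print(flip_bit_i8(64, 7))
```

This bounded-error property is one plausible reason quantized 8-bit models tolerate single EMFI faults better than their floating-point counterparts, as the paper's measurements indicate.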