[2602.15161] Exploiting Layer-Specific Vulnerabilities to Backdoor Attack in Federated Learning
Summary
This paper presents the Layer Smoothing Attack (LSA), a novel backdoor attack exploiting layer-specific vulnerabilities in federated learning systems, achieving high success rates while evading detection.
Why It Matters
As federated learning becomes more prevalent for privacy-preserving data processing, understanding its vulnerabilities is crucial. This research highlights significant security flaws and emphasizes the need for improved defenses that are aware of layer-specific risks, with implications for future AI security protocols.
Key Takeaways
- The Layer Smoothing Attack (LSA) exploits layer-specific vulnerabilities by manipulating only the backdoor-critical (BC) layers of a neural network.
- LSA can achieve a backdoor success rate of up to 97% without compromising model accuracy.
- Current federated learning defenses are inadequate against layer-specific attacks.
- Identifying backdoor-critical layers is essential for developing effective security measures.
- Future defenses must incorporate layer-aware detection strategies.
Computer Science > Cryptography and Security
arXiv:2602.15161 (cs), submitted on 16 Feb 2026
Authors: Mohammad Hadi Foroughi, Seyed Hamed Rastegar, Mohammad Sabokrou, Ahmad Khonsari
Abstract
Federated learning (FL) enables distributed model training across edge devices while preserving data locality. This decentralized approach has emerged as a promising solution for collaborative learning on sensitive user data, effectively addressing the longstanding privacy concerns inherent in centralized systems. However, the decentralized nature of FL exposes new security vulnerabilities, especially backdoor attacks that threaten model integrity. To investigate this critical concern, this paper presents the Layer Smoothing Attack (LSA), a novel backdoor attack that exploits layer-specific vulnerabilities in neural networks. First, a Layer Substitution Analysis methodology systematically identifies backdoor-critical (BC) layers that contribute most significantly to backdoor success. Subsequently, LSA strategically manipulates these BC layers to inject persistent backdoors while remaining undetected by state-of-the-art defense mechanisms. Extensive experiments across diverse model architectures and datasets demo...
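The abstract's first step, Layer Substitution Analysis, can be illustrated with a minimal sketch. This is an assumption-laden reconstruction from the abstract alone, not the paper's implementation: swap each layer of a clean model with the corresponding layer from a backdoored model, measure how much the backdoor success rate (BSR) recovers, and flag layers whose substitution alone restores most of the backdoor effect as backdoor-critical (BC). The function names, the `eval_bsr` callable, and the threshold are all hypothetical; models are simplified to dicts mapping layer names to weights.

```python
# Hypothetical sketch of Layer Substitution Analysis, based only on the
# abstract. All names and the 0.5 threshold are illustrative assumptions.

def layer_substitution_analysis(clean_model, backdoored_model, eval_bsr,
                                threshold=0.5):
    """Rank layers by how backdoor-critical they appear.

    clean_model / backdoored_model: dicts mapping layer name -> weights.
    eval_bsr: callable scoring a model's backdoor success rate in [0, 1].
    Returns (BC layer names sorted by criticality, per-layer BSR scores).
    """
    scores = {}
    for name in clean_model:
        hybrid = dict(clean_model)              # start from clean weights
        hybrid[name] = backdoored_model[name]   # substitute a single layer
        scores[name] = eval_bsr(hybrid)         # measure backdoor recovery
    # Layers whose substitution alone restores a large fraction of the
    # backdoor effect are flagged as backdoor-critical (BC).
    bc_layers = [n for n, s in scores.items() if s >= threshold]
    return sorted(bc_layers, key=lambda n: -scores[n]), scores


# Toy demonstration with a stubbed evaluator: pretend only "fc" carries
# the backdoor, so substituting it alone yields a high BSR.
clean = {"conv1": 0, "conv2": 0, "fc": 0}
poisoned = {"conv1": 1, "conv2": 1, "fc": 1}

def fake_bsr(model):
    return 0.9 if model["fc"] == 1 else 0.1

bc, scores = layer_substitution_analysis(clean, poisoned, fake_bsr)
print(bc)  # ['fc']
```

In a real FL setting the dicts would be model state dicts and `eval_bsr` would run the hybrid model on triggered inputs; the ranking step is what lets the attack concentrate its manipulation on a few BC layers rather than the whole model.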