[2604.09489] XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers
Computer Science > Cryptography and Security
arXiv:2604.09489 (cs)
[Submitted on 10 Apr 2026]

Title: XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers
Authors: Israt Jahan Mouri, Muhammad Ridowan, Muhammad Abdullah Adnan

Abstract: Model poisoning attacks pose a significant security threat to Federated Learning (FL). Most existing model poisoning attacks rely on collusion, requiring adversarial clients to coordinate by exchanging local benign models and synchronizing the generation of their poisoned updates. However, sustaining such coordination is increasingly impractical in real-world FL deployments, as it effectively requires botnet-like control over many devices. This approach is costly to maintain and highly vulnerable to detection. This context raises a fundamental question: Can model poisoning attacks remain effective without any communication between attackers? To address this challenge, we introduce and formalize the \textbf{non-collusive attack model}, in which all compromised clients share a common adversarial objective but operate independently. Under this model, each attacker generates its malicious update without communicating with other adversaries, accessing other clients' updates, or relying on any knowledge of server-side defenses. To demonstrate the ...
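The non-collusive threat model described in the abstract can be illustrated with a toy simulation. The sketch below is hypothetical and is not XFED's actual attack (the abstract does not specify it); it uses a simple sign-flipping perturbation as a stand-in to show the constraint that matters: each attacker sees only the broadcast global model and its own local data, with no inter-attacker channel and no knowledge of the server's aggregation rule.

```python
def local_update(global_model, local_target, lr=0.1):
    """Benign client: one gradient step on a toy squared-error objective,
    pulling each weight toward the client's local target value."""
    grad = [2.0 * (w - t) for w, t in zip(global_model, local_target)]
    return [w - lr * g for w, g in zip(global_model, grad)]

def non_collusive_attack(global_model, local_target, lr=0.1):
    """Hypothetical non-collusive attacker (sign-flipping stand-in, not
    the XFED method): computes its benign update locally, then reverses
    the update direction. Inputs are only the broadcast global model and
    the attacker's own data -- no communication with other adversaries,
    no access to other clients' updates, no server-side knowledge."""
    benign = local_update(global_model, local_target, lr)
    return [w - (b - w) for w, b in zip(global_model, benign)]

def federated_round(global_model, client_targets, attacker_ids):
    """One FL round: every client acts independently, then the server
    averages the submitted updates (plain FedAvg, no robust defense)."""
    updates = [
        non_collusive_attack(global_model, t) if i in attacker_ids
        else local_update(global_model, t)
        for i, t in enumerate(client_targets)
    ]
    return [sum(ws) / len(ws) for ws in zip(*updates)]
```

Because each attacker's procedure is a pure function of the global model and its own data, adding or removing attackers requires no coordination infrastructure, which is exactly the practicality argument the abstract makes against collusive attacks.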