[2602.22488] Explainability-Aware Evaluation of Transfer Learning Models for IoT DDoS Detection Under Resource Constraints


Summary

This article evaluates transfer learning models for IoT DDoS detection, focusing on explainability and resource constraints. It analyzes seven pre-trained CNN architectures to determine their performance and interpretability in real-world deployments.

Why It Matters

As IoT devices proliferate, they become increasingly vulnerable to DDoS attacks. Understanding the reliability and interpretability of detection models is crucial for deploying effective security measures in resource-constrained environments. This study provides insights into selecting suitable models that balance performance and explainability.

Key Takeaways

  • DenseNet and MobileNet architectures excel in DDoS detection performance and reliability.
  • The study emphasizes the need for explainability in AI models used for security applications.
  • MobileNetV3 offers a favorable latency-accuracy trade-off for fog-level deployment.
  • Integrating performance metrics with interpretability assessments enhances model selection.
  • The findings guide practitioners in choosing deep learning models that meet operational constraints.
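The reliability statistics the study leans on, MCC and the Youden Index, are easy to compute directly from confusion counts. A minimal sketch follows; the counts used here are illustrative placeholders, not results from the paper.

```python
# Hedged sketch: MCC and Youden Index (J) from binary confusion counts.
# The counts below are made-up placeholders, not figures from the study.
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient; 1.0 = perfect, 0.0 = chance."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def youden_j(tp: int, tn: int, fp: int, fn: int) -> float:
    """Youden Index J = sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0

# Illustrative counts for one attack class treated as one-vs-rest
tp, tn, fp, fn = 950, 920, 80, 50
print(f"MCC: {mcc(tp, tn, fp, fn):.3f}")       # ~0.870
print(f"Youden J: {youden_j(tp, tn, fp, fn):.3f}")  # 0.870
```

Unlike accuracy, both statistics stay informative under the class imbalance typical of DDoS traffic, which is presumably why the study pairs them with confidence intervals.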

Computer Science > Cryptography and Security

arXiv:2602.22488 (cs) · Submitted on 25 Feb 2026

Title: Explainability-Aware Evaluation of Transfer Learning Models for IoT DDoS Detection Under Resource Constraints

Author: Nelly Elsayed

Abstract: Distributed denial-of-service (DDoS) attacks threaten the availability of Internet of Things (IoT) infrastructures, particularly under resource-constrained deployment conditions. Although transfer learning models have shown promising detection accuracy, their reliability, computational feasibility, and interpretability in operational environments remain insufficiently explored. This study presents an explainability-aware empirical evaluation of seven pre-trained convolutional neural network architectures for multi-class IoT DDoS detection using the CICDDoS2019 dataset and an image-based traffic representation. The analysis integrates performance metrics, reliability-oriented statistics (MCC, Youden Index, confidence intervals), latency and training cost assessment, and interpretability evaluation using Grad-CAM and SHAP. Results indicate that DenseNet and MobileNet-based architectures achieve strong detection performance while demonstrating superior reliability and compact, class-consistent attribution patterns. DenseNet169 offers the strongest reliabi...
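The interpretability side of the evaluation rests on Grad-CAM, whose core step is a gradient-weighted sum of convolutional feature maps. The sketch below shows that weighting in plain Python on tiny hand-made grids; in a real pipeline the feature maps and gradients would come from the network's last convolutional layer, and this is a simplified illustration rather than the paper's implementation.

```python
# Hedged sketch of the Grad-CAM weighting step behind attribution maps.
# fmaps/grads below are toy 2x2 grids standing in for real conv activations.

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: K channels, each an HxW grid of floats.
    Returns the ReLU'd gradient-weighted sum of channels (HxW heatmap)."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # alpha_k: global-average-pooled gradient per channel
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    heatmap = [[0.0] * w for _ in range(h)]
    for a, fmap in zip(alphas, feature_maps):
        for i in range(h):
            for j in range(w):
                heatmap[i][j] += a * fmap[i][j]
    # ReLU keeps only regions with positive influence on the class score
    return [[max(0.0, v) for v in row] for row in heatmap]

# Two toy channels: one with positive gradients, one with negative
fmaps = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 3.0], [1.0, 0.0]]]
grads = [[[0.4, 0.4], [0.4, 0.4]], [[-0.2, -0.2], [-0.2, -0.2]]]
print(grad_cam(fmaps, grads))  # negatively weighted cells are zeroed
```

The "compact, class-consistent attribution patterns" reported for DenseNet and MobileNet would correspond to heatmaps like this one concentrating on the same traffic-image regions across samples of the same attack class.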
