[2602.13271] Human-Centered Explainable AI for Security Enhancement: A Deep Intrusion Detection Framework
Summary
This paper presents a novel intrusion detection framework that integrates Explainable AI (XAI) to enhance the interpretability and performance of cybersecurity systems, demonstrating superior results over traditional IDS and black-box deep learning models on the NSL-KDD benchmark.
Why It Matters
As cyber threats continue to evolve, the need for effective and interpretable intrusion detection systems becomes critical. This research highlights the importance of combining deep learning with explainability, ensuring that security analysts can trust and understand AI decisions, ultimately improving cybersecurity measures.
Key Takeaways
- The proposed framework combines a CNN with an LSTM to capture local patterns and temporal dependencies in traffic sequences.
- Both CNN and LSTM achieved 0.99 accuracy on NSL-KDD, with performance metrics superior to traditional IDS.
- Incorporated SHAP for model interpretability, aiding security analysts in understanding decisions.
- Highlighted key features influencing model decisions, enhancing trust in AI systems.
- Recommended future enhancements for real-time adaptive learning in threat detection.
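The paper does not publish its architecture details here, but the CNN-then-LSTM pipeline described above can be sketched as a forward pass in NumPy. All dimensions (20 time steps, 8 flow features, 16 conv filters of width 3, 32 LSTM units) and the random, untrained weights are hypothetical, chosen only to illustrate how convolutional features feed the recurrent layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    """Valid 1-D convolution with ReLU: x is (T, F), w is (k, F, C), b is (C,)."""
    k = w.shape[0]
    out = np.empty((x.shape[0] - k + 1, w.shape[2]))
    for t in range(out.shape[0]):
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_last_hidden(x, Wx, Wh, b, H):
    """Run one LSTM layer over (T, C) inputs; return the final hidden state."""
    h = np.zeros(H)
    c = np.zeros(H)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b      # all four gates stacked: shape (4H,)
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)      # cell state carries long-range memory
        h = o * np.tanh(c)
    return h

# Hypothetical dimensions and untrained random weights.
T, F, C, K, H = 20, 8, 16, 3, 32
x = rng.normal(size=(T, F))                       # one preprocessed flow sequence
w_conv = rng.normal(scale=0.1, size=(K, F, C))
b_conv = np.zeros(C)
Wx = rng.normal(scale=0.1, size=(C, 4 * H))
Wh = rng.normal(scale=0.1, size=(H, 4 * H))
b_lstm = np.zeros(4 * H)
w_out = rng.normal(scale=0.1, size=H)

feat = conv1d_relu(x, w_conv, b_conv)             # local patterns in the flow
h = lstm_last_hidden(feat, Wx, Wh, b_lstm, H)     # temporal dependencies
p_attack = sigmoid(h @ w_out)                     # probability the flow is malicious
print(float(p_attack))
```

In a real deployment the weights would come from training on NSL-KDD and the model would be batched; the sketch only shows the data flow from convolutional feature maps into the recurrent layer.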
Computer Science > Artificial Intelligence
arXiv:2602.13271 (cs) [Submitted on 4 Feb 2026]
Title: Human-Centered Explainable AI for Security Enhancement: A Deep Intrusion Detection Framework
Authors: Md Muntasir Jahid Ayan, Md. Shahriar Rashid, Tazzina Afroze Hassan, Hossain Md. Mubashshir Jamil, Mahbubul Islam, Lisan Al Amin, Rupak Kumar Das, Farzana Akter, Faisal Quader
Abstract: The increasing complexity and frequency of cyber-threats demand intrusion detection systems (IDS) that are not only accurate but also interpretable. This paper presents a novel IDS framework that integrates Explainable Artificial Intelligence (XAI) to enhance transparency in deep learning models. The framework is evaluated experimentally on the benchmark NSL-KDD dataset, demonstrating superior performance compared to traditional IDS and black-box deep learning models. The proposed approach combines Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) networks to capture temporal dependencies in traffic sequences. Our deep learning results show that both CNN and LSTM reached 0.99 accuracy, with LSTM outperforming CNN on macro-average precision, recall, and F1 score. On weighted-average precision, recall, and F1 score, the two models scored almost identically. To ensure inte...
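The SHAP attributions the takeaways mention rest on Shapley values: each feature's contribution is its average marginal effect on the model output across all coalitions of the other features. The idea can be shown exactly, in pure Python, on a toy scoring function (not the paper's model or the SHAP library); the three flow features and their weights below are made up for illustration:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x against a baseline.

    Features absent from a coalition are replaced by their baseline
    values, the substitution that SHAP's KernelExplainer approximates.
    """
    n = len(x)
    idx = list(range(n))
    phi = [0.0] * n
    for i in idx:
        others = [j for j in idx if j != i]
        for r in range(n):
            for S in combinations(others, r):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in idx]
                without_i = [x[j] if j in S else baseline[j] for j in idx]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical toy IDS score over three flow features
# (duration, bytes sent, failed logins); weights are made up.
def score(v):
    return 0.2 * v[0] + 0.5 * v[1] + 1.5 * v[2]

x = [1.0, 2.0, 1.0]          # a suspicious flow
baseline = [0.0, 0.0, 0.0]   # an "average" benign flow
phi = shapley_values(score, x, baseline)
print([round(p, 3) for p in phi])  # → [0.2, 1.0, 1.5]
```

For a linear model each attribution reduces to weight times feature deviation, and by the efficiency property the attributions sum to `score(x) - score(baseline)`; it is exactly this kind of per-feature decomposition that lets a security analyst see which traffic attributes drove an alert.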