[2602.13271] Human-Centered Explainable AI for Security Enhancement: A Deep Intrusion Detection Framework

arXiv - Machine Learning 4 min read Article

Summary

This paper presents a novel intrusion detection framework that integrates Explainable AI (XAI) to enhance the interpretability and performance of cybersecurity systems, demonstrating superior results over traditional models.

Why It Matters

As cyber threats continue to evolve, the need for effective and interpretable intrusion detection systems becomes critical. This research highlights the importance of combining deep learning with explainability, ensuring that security analysts can trust and understand AI decisions, ultimately improving cybersecurity measures.

Key Takeaways

  • The proposed framework integrates CNN and LSTM for effective intrusion detection.
  • Achieved high accuracy (0.99) and superior performance metrics compared to traditional IDS.
  • Incorporated SHAP for model interpretability, aiding security analysts in understanding decisions.
  • Highlighted key features influencing model decisions, enhancing trust in AI systems.
  • Recommended future enhancements for real-time adaptive learning in threat detection.
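The SHAP attributions mentioned above rest on Shapley values: a feature's contribution is its average marginal effect over all feature subsets. As a rough illustration of that idea (the paper uses the SHAP library; the exact-Shapley function below, its toy linear model, and all names are illustrative assumptions, not the paper's implementation):

```python
# Minimal sketch of exact Shapley attributions for a model with few features.
# Feasible only for small feature counts, since it enumerates all subsets;
# SHAP's library approximates this efficiently for real models.
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions of f at instance x against a baseline input."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                # Shapley kernel weight for a coalition of size r
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy linear "detector": for linear models the Shapley value of feature i
# reduces to w_i * (x_i - baseline_i).
phi = shapley_values(lambda v: 2 * v[0] + 3 * v[1], [1.0, 1.0], [0.0, 0.0])  # → [2.0, 3.0]
```

By construction the attributions sum to `f(x) - f(baseline)`, which is what lets an analyst read them as a complete decomposition of a single intrusion score.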

Computer Science > Artificial Intelligence · arXiv:2602.13271 (cs) · Submitted on 4 Feb 2026

Title: Human-Centered Explainable AI for Security Enhancement: A Deep Intrusion Detection Framework

Authors: Md Muntasir Jahid Ayan, Md. Shahriar Rashid, Tazzina Afroze Hassan, Hossain Md. Mubashshir Jamil, Mahbubul Islam, Lisan Al Amin, Rupak Kumar Das, Farzana Akter, Faisal Quader

Abstract: The increasing complexity and frequency of cyber-threats demand intrusion detection systems (IDS) that are not only accurate but also interpretable. This paper presents a novel IDS framework that integrates Explainable Artificial Intelligence (XAI) to enhance transparency in deep learning models. The framework is evaluated on the benchmark NSL-KDD dataset, demonstrating superior performance compared to traditional IDS and black-box deep learning models. The proposed approach combines a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) networks to capture temporal dependencies in traffic sequences. Both CNN and LSTM reach 0.99 accuracy, with LSTM outperforming CNN on macro-average precision, recall, and F1-score; on the weighted-average metrics the two models score almost identically. To ensure inte...
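The pipeline the abstract describes, convolutional feature extraction over a traffic sequence followed by an LSTM that models temporal dependencies, can be sketched in miniature. This is a pure-Python toy with scalar state, not the paper's architecture; all layer sizes and weights are illustrative assumptions:

```python
# Toy CNN -> LSTM forward pass over a 1-D "traffic" sequence.
# Illustrative only: real IDS models use multi-channel layers and learned weights.
import math

def conv1d(seq, kernel):
    """Valid 1-D convolution (cross-correlation) over a feature sequence."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One LSTM cell step with scalar hidden/cell state; w holds the weights."""
    i = sigmoid(w["wi"] * x + w["ui"] * h)    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h)    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h)    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h)  # candidate cell state
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

def cnn_lstm_score(seq, kernel, w):
    """Convolve the sequence, run the LSTM over the feature map, score it."""
    h = c = 0.0
    for x in conv1d(seq, kernel):
        h, c = lstm_step(x, h, c, w)
    return sigmoid(h)  # probability-like intrusion score in (0, 1)

weights = {k: 0.5 for k in ("wi", "ui", "wf", "uf", "wo", "ug", "wg", "uo")}
score = cnn_lstm_score([0.2, 0.9, 0.1, 0.8, 0.4], [0.5, -0.5], weights)
```

The design point is the division of labor: the convolution compresses local patterns in each window of traffic features, and the recurrent cell carries context across windows, which is why the combination suits sequential network data.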
