[2602.19087] Detecting Cybersecurity Threats by Integrating Explainable AI with SHAP Interpretability and Strategic Data Sampling

arXiv - Machine Learning

Summary

This article summarizes a framework for detecting cybersecurity threats that integrates Explainable AI (XAI) with SHAP interpretability and strategic data sampling, improving both transparency and efficiency in threat detection.
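The SHAP values at the heart of the framework are Shapley attributions from cooperative game theory. As a rough illustration of the idea only (not the paper's implementation, and without the `shap` package), the exact computation for a tiny hypothetical linear "intrusion score" can be done by enumerating feature coalitions, replacing absent features with a baseline value:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for one instance.

    Features outside a coalition are replaced by their baseline value,
    the same "missingness" convention SHAP approximates at scale."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1 drawn from the other features
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(s) | {i}) - value(set(s)))
        phi.append(total)
    return phi

# Hypothetical linear "intrusion score" over three flow features.
weights = [2.0, -1.0, 0.5]
score = lambda z: sum(w * v for w, v in zip(weights, z))
x = [3.0, 1.0, 4.0]          # flow being explained
baseline = [1.0, 1.0, 2.0]   # reference "typical" flow
print(shapley_values(score, x, baseline))
```

For a linear model the attribution of feature i reduces to w_i * (x_i - baseline_i), so the printed values are approximately [4.0, 0.0, 1.0], and they sum to score(x) - score(baseline), the efficiency property SHAP guarantees.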

Why It Matters

As cybersecurity threats evolve, the need for transparent and reliable AI systems becomes critical. This research addresses key challenges in deploying AI for security, ensuring that models are interpretable, efficient, and rigorously tested, and thereby fostering trust in the automated systems that security analysts rely on.

Key Takeaways

  • Integrates Explainable AI with SHAP for better interpretability in cybersecurity.
  • Utilizes strategic data sampling to maintain class distributions while enhancing model efficiency.
  • Implements automated data leakage prevention to ensure experimental integrity.
  • Demonstrates that explainability and computational efficiency can coexist in AI systems.
  • Provides actionable insights for security analysts, improving decision-making processes.

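The sampling takeaway above can be sketched concretely: a per-class (stratified) subsample shrinks the dataset while keeping the benign-to-attack ratio intact. This is an illustrative sketch, not the paper's code; the names and numbers below are hypothetical:

```python
import random
from collections import Counter, defaultdict

def stratified_sample(rows, labels, fraction, seed=0):
    """Subsample each class independently so class proportions survive."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for row, label in zip(rows, labels):
        by_class[label].append(row)
    sampled_rows, sampled_labels = [], []
    for label, members in by_class.items():
        # Keep at least one row per class so rare attacks are never dropped.
        k = max(1, round(len(members) * fraction))
        for row in rng.sample(members, k):
            sampled_rows.append(row)
            sampled_labels.append(label)
    return sampled_rows, sampled_labels

# Imbalanced toy traffic log: 90 benign flows, 10 attack flows.
rows = [[i] for i in range(100)]
labels = ["benign"] * 90 + ["attack"] * 10
_, sampled = stratified_sample(rows, labels, fraction=0.2)
print(Counter(sampled))  # 18 benign, 2 attack: the 9:1 ratio is preserved
```

Sampling uniformly over the whole log instead could easily drop every attack row at small fractions, which is exactly what class-preserving sampling is meant to prevent.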
Computer Science > Cryptography and Security
arXiv:2602.19087 (cs) · Submitted on 22 Feb 2026

Title: Detecting Cybersecurity Threats by Integrating Explainable AI with SHAP Interpretability and Strategic Data Sampling
Authors: Norrakith Srisumrith, Sunantha Sodsee

Abstract: The critical need for transparent and trustworthy machine learning in cybersecurity operations drives the development of this integrated Explainable AI (XAI) framework. Our methodology addresses three fundamental challenges in deploying AI for threat detection: handling massive datasets through a Strategic Sampling Methodology that preserves class distributions while enabling efficient model development; ensuring experimental rigor via Automated Data Leakage Prevention that systematically identifies and removes contaminated features; and providing operational transparency through an Integrated XAI Implementation using SHAP analysis for model-agnostic interpretability across algorithms. Applied to the CIC-IDS2017 dataset, our approach maintains detection efficacy while reducing computational overhead and delivering actionable explanations for security analysts. The framework demonstrates that explainability, computational efficiency, and experimental integrity can be simultaneously achieved, pr...
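The Automated Data Leakage Prevention in the abstract removes contaminated features before training. One minimal way to flag such features (a sketch under simple assumptions, not the authors' method) is to look for columns that correlate almost perfectly with the label, a common symptom of a feature that encodes the answer:

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy) if vx and vy else 0.0

def flag_leaky_features(columns, labels, threshold=0.95):
    """Flag features so strongly correlated with the label that they
    likely encode the answer (e.g. a post-hoc alert identifier)."""
    return [name for name, col in columns.items()
            if abs(pearson(col, labels)) >= threshold]

labels = [0, 0, 0, 1, 1, 1]  # 0 = benign, 1 = attack (toy data)
columns = {
    "pkt_len":  [10, 45, 11, 50, 12, 52],  # informative but noisy: keep
    "alert_id": [0, 0, 0, 1, 1, 1],        # mirrors the label: leakage
}
print(flag_leaky_features(columns, labels))  # ['alert_id']
```

A production pipeline would also need checks that linear correlation cannot catch (duplicate rows across train/test splits, timestamp ordering, categorical identifiers), but the threshold test above captures the core idea of removing features contaminated by the label.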

