[2604.03843] Explainability-Guided Adversarial Attacks on Transformer-Based Malware Detectors Using Control Flow Graphs

arXiv - Machine Learning

About this article

Computer Science > Cryptography and Security — arXiv:2604.03843 (cs) [Submitted on 4 Apr 2026]

Title: Explainability-Guided Adversarial Attacks on Transformer-Based Malware Detectors Using Control Flow Graphs

Authors: Andrew Wheeler, Kshitiz Aryal, Maanak Gupta

Abstract: Transformer-based malware detection systems operating on graph modalities such as control flow graphs (CFGs) achieve strong performance by modeling structural relationships in program behavior. However, their robustness to adversarial evasion attacks remains underexplored. This paper examines the vulnerability of a RoBERTa-based malware detector that linearizes CFGs into sequences of function calls, a design choice that enables transformer modeling but may introduce token-level sensitivities and ordering artifacts exploitable by adversaries. By evaluating evasion strategies within this graph-to-sequence framework, we provide insight into the practical robustness of transformer-based malware detectors beyond aggregate detection accuracy. This paper proposes a white-box adversarial evasion attack that leverages explainability mechanisms to identify and perturb the most influential graph components. Using token- and word-level attributions derived from integrated gradients, the attack iteratively replaces positively...
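The attack loop sketched in the abstract — score the sample, attribute the score to individual function-call tokens with integrated gradients, then greedily replace the most positively attributed calls — can be illustrated with a toy differentiable scorer standing in for the paper's RoBERTa detector. Everything below (token names, one-dimensional "embeddings", the sigmoid scorer, the 0.5 threshold) is a hypothetical stand-in, not the paper's actual model or API:

```python
import math

# Toy 1-d "embedding" per function-call token in the linearized CFG.
# Positive values push the detector toward "malicious" (all hypothetical).
EMB = {
    "main": 0.1, "CreateFile": 1.0, "RegSetValue": 1.5,
    "VirtualAlloc": 2.0, "printf": -1.0,
}

def malware_score(tokens):
    """Sigmoid of the mean token embedding (stand-in for the detector)."""
    s = sum(EMB[t] for t in tokens) / len(tokens)
    return 1.0 / (1.0 + math.exp(-s))

def integrated_gradients(tokens, steps=64):
    """Per-token IG attributions against an all-zeros baseline.

    IG_i = (x_i - 0) * average over the straight path of d(score)/d(x_i);
    for this scorer d(score)/d(x_i) = sigmoid'(s) / n, so we average that
    factor along the path and scale each token's embedding by it.
    """
    n = len(tokens)
    xs = [EMB[t] for t in tokens]
    avg_grad = 0.0
    for k in range(1, steps + 1):
        s = (k / steps) * sum(xs) / n          # score input along the path
        sig = 1.0 / (1.0 + math.exp(-s))
        avg_grad += sig * (1.0 - sig) / n      # sigmoid'(s) / n
    avg_grad /= steps
    return [x * avg_grad for x in xs]          # one attribution per token

def evade(tokens, benign="printf", threshold=0.5, max_edits=3):
    """Greedy evasion: swap the most positively attributed call for a
    benign one until the score drops below the decision threshold."""
    tokens = list(tokens)                      # do not mutate the input
    for _ in range(max_edits):
        if malware_score(tokens) < threshold:
            break
        attr = integrated_gradients(tokens)
        tokens[attr.index(max(attr))] = benign
    return tokens
```

For example, `evade(["main", "CreateFile", "RegSetValue", "VirtualAlloc"])` first swaps out `VirtualAlloc` (the highest-attribution call), then `RegSetValue`, at which point the toy score falls below the threshold. The paper's setting differs in that attributions come from integrated gradients through a full transformer and replacements must preserve program semantics, but the identify-then-perturb structure is the same.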

Originally published on April 07, 2026. Curated by AI News.

