[2506.08514] DiffGradCAM: A Universal Class Activation Map Resistant to Adversarial Training
Computer Science > Machine Learning
arXiv:2506.08514 (cs)
[Submitted on 10 Jun 2025 (v1), last revised 2 Apr 2026 (this version, v3)]

Title: DiffGradCAM: A Universal Class Activation Map Resistant to Adversarial Training
Authors: Jacob Piland, Chris Sweet, Adam Czajka

Abstract: Class Activation Mapping (CAM) and its gradient-based variants (e.g., GradCAM) have become standard tools for explaining Convolutional Neural Network (CNN) predictions. However, these approaches typically focus on individual logits, while for neural networks using softmax, the class membership probability estimates depend \textit{only} on the \textit{differences} between logits, not on their absolute values. This disconnect leaves standard CAMs vulnerable to adversarial manipulation, such as passive fooling, where a model is trained to produce misleading CAMs without affecting decision performance. We introduce \textbf{Salience-Hoax Activation Maps (SHAMs)}, an \emph{entropy-aware form of passive fooling} that serves as a benchmark for CAM robustness under adversarial conditions. To address the passive fooling vulnerability, we then propose \textbf{DiffGradCAM}, a novel, lightweight, and contrastive approach to class activation mapping that is not only resistant to passive fooling but also matches the output of standard CAM me...
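The abstract's central observation, that softmax probabilities depend only on the differences between logits and not on their absolute values, can be checked directly. The sketch below is a minimal illustration of that invariance, not code from the paper: shifting every logit by the same constant leaves the softmax output unchanged, which is why explanations tied to a single logit's absolute value can be manipulated without changing the model's decisions.

```python
import math

def softmax(logits):
    # Subtracting the max is the standard numerical-stability trick; it
    # works precisely because softmax is invariant to a constant shift.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
shifted = [z + 5.0 for z in logits]  # add the same constant to every logit

p_original = softmax(logits)
p_shifted = softmax(shifted)

# The probabilities agree to machine precision: only logit differences matter.
print(all(abs(a - b) < 1e-12 for a, b in zip(p_original, p_shifted)))  # True
```

This is the disconnect the paper exploits with SHAMs: a model can be trained to move individual logit values (and hence single-logit CAMs) arbitrarily while keeping all pairwise logit differences, and therefore all predictions, intact.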