[2506.05937] Robust Adversarial Quantification via Conflict-Aware Evidential Deep Learning
Computer Science > Machine Learning
arXiv:2506.05937 (cs)
[Submitted on 6 Jun 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: Robust Adversarial Quantification via Conflict-Aware Evidential Deep Learning
Authors: Charmaine Barker, Daniel Bethell, Simos Gerasimou

Abstract: Reliability of deep learning models is critical for deployment in high-stakes applications, where out-of-distribution (OOD) or adversarial inputs may lead to detrimental outcomes. Evidential Deep Learning (EDL), an efficient paradigm for uncertainty quantification, models predictions as Dirichlet distributions from a single forward pass. However, EDL is particularly vulnerable to adversarially perturbed inputs, on which it makes overconfident errors. Conflict-aware Evidential Deep Learning (C-EDL) is a lightweight post-hoc uncertainty quantification approach that mitigates these issues, enhancing adversarial and OOD robustness without retraining. C-EDL generates diverse, task-preserving transformations of each input and quantifies representational disagreement across them to calibrate uncertainty estimates when needed. C-EDL's conflict-aware prediction adjustment improves the detection of OOD and adversarial inputs while maintaining high in-distribution accuracy and low computational overhead. Our experimental evaluation shows that C-EDL significantly outperforms stat...
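The mechanism the abstract describes — Dirichlet-based uncertainty from evidence, plus discounting that evidence when transformed views of the same input disagree — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the majority-vote conflict measure, and the linear evidence discount are all assumptions made here for clarity; C-EDL's actual conflict quantification and adjustment may differ.

```python
import numpy as np

def edl_uncertainty(evidence):
    """Standard EDL quantities for one input with K classes.

    evidence: non-negative (K,) array. Dirichlet parameters are
    alpha = evidence + 1; uncertainty mass is u = K / sum(alpha).
    """
    alpha = evidence + 1.0
    S = alpha.sum()
    prob = alpha / S           # expected class probabilities
    u = alpha.shape[0] / S     # vacuity-style uncertainty mass
    return prob, u

def conflict_aware_adjust(evidence_per_view):
    """Hypothetical conflict-aware adjustment in the spirit of C-EDL.

    evidence_per_view: (T, K) evidence from T task-preserving
    transformations of one input. Conflict is proxied here by the
    fraction of views whose argmax class departs from the majority
    vote (an assumption, not the paper's measure); pooled evidence
    is discounted in proportion, which raises the uncertainty mass.
    """
    preds = evidence_per_view.argmax(axis=1)
    majority = np.bincount(preds).argmax()
    conflict = float(np.mean(preds != majority))
    pooled = evidence_per_view.mean(axis=0) * (1.0 - conflict)
    prob, u = edl_uncertainty(pooled)
    return prob, u, conflict

# Views that agree keep uncertainty low; conflicting views raise it.
agree = np.array([[5.0, 0.1, 0.1], [4.0, 0.2, 0.1], [6.0, 0.1, 0.2]])
disagree = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 0.0, 5.0]])
_, u_agree, c_agree = conflict_aware_adjust(agree)
_, u_conflict, c_conflict = conflict_aware_adjust(disagree)
```

With the agreeing views, the conflict score is zero and the pooled evidence passes through unchanged; with the disagreeing views, two thirds of the predictions depart from the majority class, so the evidence is heavily discounted and the uncertainty mass grows — the qualitative behavior the abstract attributes to C-EDL's prediction adjustment.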