[2601.18678] Counterfactual Explanations on Robust Perceptual Geodesics
Computer Science > Machine Learning

arXiv:2601.18678 (cs)

[Submitted on 26 Jan 2026 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Counterfactual Explanations on Robust Perceptual Geodesics
Authors: Eslam Zaher, Maciej Trzaskowski, Quan Nguyen, Fred Roosta

Abstract: Latent-space optimization methods for counterfactual explanations - framed as minimal semantic perturbations that change model predictions - inherit the ambiguity of Wachter et al.'s objective: the choice of distance metric dictates whether perturbations are meaningful or adversarial. Existing approaches adopt flat or misaligned geometries, leading to off-manifold artifacts, semantic drift, or adversarial collapse. We introduce Perceptual Counterfactual Geodesics (PCG), a method that constructs counterfactuals by tracing geodesics under a perceptually Riemannian metric induced from robust vision features. This geometry aligns with human perception and penalizes brittle directions, enabling smooth, on-manifold, semantically valid transitions. Experiments on three vision datasets show that PCG outperforms baselines and reveals failure modes hidden under standard metrics.

Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC); Differential Geometry (math.DG)
Cite as: arXiv:2601.186...