[2603.27792] What-If Explanations Over Time: Counterfactuals for Time Series Classification
Computer Science > Machine Learning
arXiv:2603.27792 (cs) [Submitted on 29 Mar 2026]

Title: What-If Explanations Over Time: Counterfactuals for Time Series Classification
Authors: Udo Schlegel, Thomas Seidl

Abstract: Counterfactual explanations have emerged as a powerful approach in explainable AI, providing what-if scenarios that reveal how minimal changes to an input time series can alter a model's prediction. This work surveys recent algorithms for counterfactual explanations in time series classification. We review state-of-the-art methods spanning instance-based nearest-neighbor techniques, pattern-driven algorithms, gradient-based optimization, and generative models. For each, we discuss the underlying methodology, the models and classifiers it targets, and the datasets on which it is evaluated. We highlight challenges unique to generating counterfactuals for temporal data, such as maintaining temporal coherence, plausibility, and actionable interpretability, which distinguish the temporal domain from the tabular and image domains. We analyze the strengths and limitations of existing approaches and compare their effectiveness along key dimensions (validity, proximity, sparsity, plausibility, etc.). In addition, we provide an open-source library, Counterfactual Explanations fo...
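As a minimal sketch of the instance-based nearest-neighbor family the abstract mentions (not code from the paper), a "nearest unlike neighbor" counterfactual simply returns the closest training series that the classifier assigns to the desired target class. The toy threshold classifier, the synthetic data, and all function names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary time-series classifier: class 1 iff the series mean exceeds 0.5.
# Stands in for any black-box model the counterfactual method queries.
def predict(series):
    return int(series.mean() > 0.5)

# Instance-based counterfactual: among training series predicted as the
# target class, return the one closest (Euclidean) to the query series x.
def nearest_unlike_neighbor(x, X_train, target_class):
    candidates = [c for c in X_train if predict(c) == target_class]
    dists = [np.linalg.norm(x - c) for c in candidates]
    return candidates[int(np.argmin(dists))]

# Synthetic training pool: 20 low-mean and 20 high-mean series of length 50.
X_train = np.concatenate([
    rng.normal(0.2, 0.05, size=(20, 50)),
    rng.normal(0.8, 0.05, size=(20, 50)),
])

x = rng.normal(0.2, 0.05, size=50)  # query series, classified as class 0
cf = nearest_unlike_neighbor(x, X_train, target_class=1)

assert predict(x) == 0 and predict(cf) == 1  # counterfactual flips the label
```

Because the counterfactual is a real training instance, it is plausible by construction; the methods the survey covers trade this guarantee against proximity and sparsity, e.g. by splicing only the discriminative subsequence of the neighbor into the query.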