[2602.13087] EXCODER: EXplainable Classification Of DiscretE time series Representations
Summary
The paper presents EXCODER, a method for explainable classification of time series based on discrete latent representations, improving interpretability without sacrificing classification performance.
Why It Matters
As deep learning models become increasingly complex, the need for explainability in AI is critical. This research addresses the challenge of making time series classification models more interpretable, which is essential for trust and usability in various applications, particularly in fields like finance and healthcare.
Key Takeaways
- EXCODER enhances explainability in time series classification using discrete latent representations.
- The method reduces redundancy and focuses on informative patterns, improving model transparency.
- A new metric, Similar Subsequence Accuracy (SSA), validates the effectiveness of XAI methods.
- The approach maintains classification performance while providing compact, interpretable explanations.
- This research contributes to the broader field of Explainable AI, addressing a significant gap in time series analysis.
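The SSA metric listed above is only partially described here (the abstract is truncated), but its stated goal, checking whether XAI-identified salient subsequences align with the label distribution of the training data, can be illustrated with a toy sketch. Everything below is a hypothetical reading, not the paper's definition: the matching rule (best sliding-window Euclidean distance per training series, majority agreement among the k nearest series) and the function name `ssa` are assumptions.

```python
import numpy as np

def ssa(salient_subseq, label, train_series, train_labels, k=3):
    # Hypothetical SSA sketch: slide the salient subsequence over each
    # training series, keep the best (lowest) Euclidean distance per series,
    # take the k nearest series overall, and score the fraction whose
    # label agrees with the label under explanation.
    m = len(salient_subseq)
    best = []
    for series in train_series:
        windows = np.lib.stride_tricks.sliding_window_view(series, m)
        best.append(np.linalg.norm(windows - salient_subseq, axis=1).min())
    nearest = np.argsort(best)[:k]
    return float(np.mean(train_labels[nearest] == label))

# Toy data: class 0 series contain a spike, class 1 series are flat.
s0 = np.zeros(10); s0[5] = 3.0
s1 = np.zeros(10); s1[2] = 3.0
train_series = [s0, s1, np.zeros(10), np.zeros(10)]
train_labels = np.array([0, 0, 1, 1])

# A spike-shaped salient subsequence should align with class 0.
score = ssa(np.array([0.0, 3.0, 0.0]), 0, train_series, train_labels, k=2)
```

Here the two nearest training series are the spike-bearing class-0 ones, so the score is 1.0, indicating the explanation is consistent with the training-label distribution.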
Computer Science > Machine Learning
arXiv:2602.13087 (cs) [Submitted on 13 Feb 2026]
Authors: Yannik Hahn, Antonin Königsfeld, Hasan Tercan, Tobias Meisen
Abstract
Deep learning has significantly improved time series classification, yet the lack of explainability in these models remains a major challenge. While Explainable AI (XAI) techniques aim to make model decisions more transparent, their effectiveness is often hindered by the high dimensionality and noise present in raw time series data. In this work, we investigate whether transforming time series into discrete latent representations, using methods such as Vector Quantized Variational Autoencoders (VQ-VAE) and Discrete Variational Autoencoders (DVAE), not only preserves but enhances explainability by reducing redundancy and focusing on the most informative patterns. We show that applying XAI methods to these compressed representations leads to concise and structured explanations that maintain faithfulness without sacrificing classification performance. Additionally, we propose Similar Subsequence Accuracy (SSA), a novel metric that quantitatively assesses the alignment between XAI-identified salient subsequences and the label distribution in the training data. SSA provi...
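The discretization step the abstract relies on (VQ-VAE-style vector quantization) can be sketched as nearest-codebook lookup: each continuous latent vector from the encoder is replaced by its closest codebook entry, turning a time series encoding into a sequence of discrete tokens. This is a minimal illustration of the general technique, not the paper's implementation; the codebook size, latent dimension, and function name `quantize` are illustrative assumptions.

```python
import numpy as np

def quantize(latents, codebook):
    # latents: (n, d) continuous encoder outputs; codebook: (k, d) entries.
    # Assign each latent vector to its nearest codebook entry (squared L2),
    # returning the discrete token ids and the quantized vectors.
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    tokens = dists.argmin(axis=1)
    return tokens, codebook[tokens]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))  # 8 discrete codes of dimension 4
# Latents near codebook entries 2, 2, and 5, with small noise added.
latents = codebook[[2, 2, 5]] + 0.01 * rng.normal(size=(3, 4))
tokens, quantized = quantize(latents, codebook)
```

Downstream, the classifier and XAI method operate on `tokens` rather than the raw series, which is what makes the resulting explanations compact: saliency is attributed to a short sequence of discrete codes instead of thousands of raw samples.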