[2602.13087] EXCODER: EXplainable Classification Of DiscretE time series Representations

arXiv - Machine Learning

Summary

The paper explores EXCODER, a method for explainable classification of discrete time series representations, enhancing interpretability while maintaining performance.

Why It Matters

As deep learning models become increasingly complex, the need for explainability in AI is critical. This research addresses the challenge of making time series classification models more interpretable, which is essential for trust and usability in various applications, particularly in fields like finance and healthcare.

Key Takeaways

  • EXCODER enhances explainability in time series classification using discrete latent representations.
  • The method reduces redundancy and focuses on informative patterns, improving model transparency.
  • A new metric, Similar Subsequence Accuracy (SSA), validates the effectiveness of XAI methods.
  • The approach maintains classification performance while providing compact, interpretable explanations.
  • This research contributes to the broader field of Explainable AI, addressing a significant gap in time series analysis.
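The discrete representations these takeaways refer to come from vector quantization, as used in VQ-VAE: an encoder's continuous latent vectors are snapped to the nearest entry of a learned codebook, yielding a short sequence of discrete codes. A minimal NumPy sketch of that lookup step (the function name and toy shapes are illustrative, not taken from the paper):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each continuous latent vector in z (shape (T, d)) to the index of
    its nearest codebook entry (codebook shape (K, d)), producing a discrete
    code sequence of length T plus the quantized latents."""
    # Squared Euclidean distance from every latent step to every codebook vector
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # discrete codes, shape (T,)
    quantized = codebook[indices]    # snapped latents, shape (T, d)
    return indices, quantized

# Toy example: 6 latent steps, a 4-entry codebook, 2-dim latents
rng = np.random.default_rng(0)
z = rng.normal(size=(6, 2))
codebook = rng.normal(size=(4, 2))
codes, zq = vector_quantize(z, codebook)
```

Because the classifier and the XAI method then operate on the short code sequence `codes` rather than the raw series, explanations can point at a handful of discrete tokens instead of thousands of noisy time steps.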

Computer Science > Machine Learning · arXiv:2602.13087 (cs) · Submitted on 13 Feb 2026

Title: EXCODER: EXplainable Classification Of DiscretE time series Representations

Authors: Yannik Hahn, Antonin Königsfeld, Hasan Tercan, Tobias Meisen

Abstract: Deep learning has significantly improved time series classification, yet the lack of explainability in these models remains a major challenge. While Explainable AI (XAI) techniques aim to make model decisions more transparent, their effectiveness is often hindered by the high dimensionality and noise present in raw time series data. In this work, we investigate whether transforming time series into discrete latent representations, using methods such as Vector Quantized Variational Autoencoders (VQ-VAE) and Discrete Variational Autoencoders (DVAE), not only preserves but enhances explainability by reducing redundancy and focusing on the most informative patterns. We show that applying XAI methods to these compressed representations leads to concise and structured explanations that maintain faithfulness without sacrificing classification performance. Additionally, we propose Similar Subsequence Accuracy (SSA), a novel metric that quantitatively assesses the alignment between XAI-identified salient subsequences and the label distribution in the training data. SSA provi...
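The abstract only outlines what SSA measures, so the following is a hypothetical sketch of one way such an alignment score could be computed: retrieve the training subsequences most similar to an XAI-identified salient subsequence and report the fraction whose source series carry the expected label. The function name, the Euclidean distance, and the k-nearest-neighbor vote are all assumptions for illustration, not the paper's definition.

```python
import numpy as np

def ssa_score(salient, train_series, train_labels, target_label, k=3):
    """Hypothetical SSA-style score: among the k training subsequences most
    similar (Euclidean) to a salient subsequence, the fraction whose series
    carry the target label. Illustrative only, not the paper's formula."""
    w = len(salient)
    candidates, labels = [], []
    # Slide a window of the salient subsequence's length over every series
    for series, label in zip(train_series, train_labels):
        for start in range(len(series) - w + 1):
            candidates.append(series[start:start + w])
            labels.append(label)
    candidates = np.asarray(candidates)
    labels = np.asarray(labels)
    dists = np.linalg.norm(candidates - salient, axis=1)
    nearest = np.argsort(dists)[:k]           # k most similar subsequences
    return float((labels[nearest] == target_label).mean())
```

A score near 1.0 would indicate that the subsequence the XAI method highlights is indeed characteristic of the predicted class in the training data; a score near chance level would suggest the explanation is not grounded in class-discriminative patterns.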
