[2602.16341] Explainability for Fault Detection System in Chemical Processes

arXiv - Machine Learning · 3 min read

Summary

This article evaluates two explainability methods, Integrated Gradients and SHAP, for fault detection in chemical processes using an LSTM classifier.

Why It Matters

Understanding how AI systems make decisions is crucial, especially in safety-critical domains like chemical processes. This research shows that explainability techniques can localize detected faults to specific process subsystems, which can enhance operational safety and reliability.

Key Takeaways

  • The study compares Integrated Gradients and SHAP for fault detection.
  • Both methods effectively identify subsystems where faults occur.
  • SHAP may provide more informative insights into root causes compared to Integrated Gradients.
  • The approach is model-agnostic, applicable to various similar processes.
  • Explainability in AI can significantly improve safety in chemical processes.
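The core idea behind Integrated Gradients, one of the two methods compared, can be sketched in a few lines: average the model's gradient along a straight path from a baseline input to the actual input, then scale by the input difference. The toy differentiable "fault score" `f` and the three sensor readings below are illustrative assumptions for a minimal sketch, not the paper's LSTM classifier or Tennessee Eastman data:

```python
import numpy as np

W = np.array([0.5, -1.2, 2.0])  # weights of the toy fault score (assumed)

def f(x):
    # toy nonlinear fault score over 3 sensor readings (illustrative only)
    return np.tanh(W @ x)

def grad_f(x):
    # analytic gradient of f at x: d/dx tanh(Wx) = (1 - tanh^2(Wx)) * W
    return (1.0 - np.tanh(W @ x) ** 2) * W

def integrated_gradients(x, baseline, steps=200):
    # average the gradient along the straight path baseline -> x,
    # then scale by (x - baseline); midpoint Riemann-sum approximation
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

x = np.array([1.0, 0.5, -0.3])   # a hypothetical sensor snapshot
baseline = np.zeros(3)           # "no signal" reference input
attr = integrated_gradients(x, baseline)

# completeness axiom: attributions sum to f(x) - f(baseline)
print(np.allclose(attr.sum(), f(x) - f(baseline), atol=1e-4))
```

In practice one would compute the gradients with automatic differentiation (e.g. Captum's `IntegratedGradients` for a PyTorch LSTM) rather than analytically, but the path integral above is the same computation.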

Computer Science > Machine Learning
arXiv:2602.16341 (cs) [Submitted on 18 Feb 2026]

Title: Explainability for Fault Detection System in Chemical Processes
Authors: Georgios Gravanis, Dimitrios Kyriakou, Spyros Voutetakis, Simira Papadopoulou, Konstantinos Diamantaras

Abstract: In this work, we apply and compare two state-of-the-art eXplainable Artificial Intelligence (XAI) methods, Integrated Gradients (IG) and SHapley Additive exPlanations (SHAP), to explain the fault diagnosis decisions of a highly accurate Long Short-Term Memory (LSTM) classifier. The classifier is trained to detect faults in a benchmark non-linear chemical process, the Tennessee Eastman Process (TEP). We highlight how XAI methods can help identify the subsystem of the process where the fault occurred. Using our knowledge of the process, we note that in most cases the same features are indicated as the most important for the decision, while in some cases the SHAP method appears to be more informative and closer to the root cause of the fault. Finally, since the XAI methods used are model-agnostic, the proposed approach is not limited to this specific process and can also be applied to similar problems.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2602.16341 [cs.LG] (or arXiv:2602.16341v1 [cs.LG] for this version) https://d...
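SHAP's attributions are Shapley values: each feature's credit is its marginal contribution to the prediction, averaged over all coalitions of the other features. A minimal sketch of that exact computation on a tiny toy model follows; the model `f_toy`, the baseline-replacement convention for "missing" features, and the sensor values are assumptions for illustration (the paper applies SHAP to an LSTM, where approximate explainers are used instead of exact enumeration):

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    # exact Shapley values by enumerating all coalitions S of other features;
    # a "missing" feature is replaced by its baseline value (a common convention)
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = baseline.copy()
                with_i[list(S) + [i]] = x[list(S) + [i]]
                without_i = baseline.copy()
                without_i[list(S)] = x[list(S)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

def f_toy(z):
    # toy fault score: interaction between sensors 0 and 1, plus sensor 2
    return z[0] * z[1] + 2.0 * z[2]

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_values(f_toy, x, baseline)
# efficiency property: attributions sum to f(x) - f(baseline);
# the interaction credit is split evenly between sensors 0 and 1
print(phi)  # phi ≈ [1., 1., 6.]
```

The exact enumeration is exponential in the number of features, which is why practical SHAP implementations rely on sampling or model-specific approximations.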
