[2602.16341] Explainability for Fault Detection System in Chemical Processes
Summary
This article evaluates two explainability methods, Integrated Gradients and SHAP, for fault detection in chemical processes using an LSTM classifier.
Why It Matters
Understanding how AI systems make decisions is crucial, especially in safety-critical domains like chemical processes. This research highlights the effectiveness of explainability techniques in identifying faults, which can enhance operational safety and reliability.
Key Takeaways
- The study compares Integrated Gradients and SHAP for fault detection.
- Both methods effectively identify subsystems where faults occur.
- In some cases, SHAP provides more informative insights, closer to the fault's root cause, than Integrated Gradients.
- The approach is model-agnostic, applicable to various similar processes.
- Explainability in AI can significantly improve safety in chemical processes.
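To make the comparison concrete, the SHAP side of the takeaways can be illustrated with exact Shapley values computed by brute force. This is a minimal sketch, not the SHAP library's sampling approximations, and the model and data are illustrative stand-ins, not the paper's LSTM or Tennessee Eastman data: absent features are replaced by a baseline value, and each feature's contribution is averaged over all coalitions.

```python
import numpy as np
from itertools import combinations
from math import factorial

# Hypothetical toy setup: a small nonlinear "fault score" model.
rng = np.random.default_rng(1)
n = 4
w = rng.normal(size=n)

def f(x):
    # Toy fault score (stand-in for the paper's LSTM classifier output).
    return float(np.tanh(w @ x))

def shapley_values(x, baseline):
    # Exact Shapley values by enumerating every feature coalition S.
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                x_S = baseline.copy()          # features outside S stay at baseline
                x_S[list(S)] = x[list(S)]      # features in S take their real values
                x_Si = x_S.copy()
                x_Si[i] = x[i]                 # add feature i to the coalition
                phi[i] += weight * (f(x_Si) - f(x_S))
    return phi

x = rng.normal(size=n)          # one (illustrative) process measurement vector
baseline = np.zeros(n)          # "no signal" reference
phi = shapley_values(x, baseline)

# Efficiency property: contributions sum to f(x) - f(baseline).
print(np.allclose(phi.sum(), f(x) - f(baseline)))  # True
```

Brute-force enumeration is exponential in the number of features, which is why practical SHAP implementations approximate; for a handful of features it is exact and makes the attribution semantics easy to verify.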
Computer Science > Machine Learning
arXiv:2602.16341 (cs)
[Submitted on 18 Feb 2026]
Title: Explainability for Fault Detection System in Chemical Processes
Authors: Georgios Gravanis, Dimitrios Kyriakou, Spyros Voutetakis, Simira Papadopoulou, Konstantinos Diamantaras
Abstract: In this work, we apply and compare two state-of-the-art eXplainable Artificial Intelligence (XAI) methods, Integrated Gradients (IG) and SHapley Additive exPlanations (SHAP), to explain the fault diagnosis decisions of a highly accurate Long Short-Term Memory (LSTM) classifier. The classifier is trained to detect faults in a benchmark non-linear chemical process, the Tennessee Eastman Process (TEP). We highlight how XAI methods can help identify the subsystem of the process where the fault occurred. Using our knowledge of the process, we note that in most cases the same features are indicated as the most important for the decision, while in some cases the SHAP method seems to be more informative and closer to the root cause of the fault. Finally, since the XAI methods used are model-agnostic, the proposed approach is not limited to this specific process and can also be applied to similar problems.
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2602.16341 [cs.LG] (or arXiv:2602.16341v1 [cs.LG] for this version)
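The other method in the abstract, Integrated Gradients, can be sketched just as compactly. The toy below uses a linear logit with an analytic gradient (a stand-in for backpropagating through the paper's LSTM; the model, baseline, and step count are all assumptions for illustration) and approximates the path integral with a midpoint Riemann sum.

```python
import numpy as np

# Hypothetical toy model: a linear fault-class logit with known gradient.
rng = np.random.default_rng(0)
n_features = 5
w = rng.normal(size=n_features)  # "trained" weights of the toy model

def logit(x):
    # Scalar fault-class score (stand-in for the LSTM's output logit).
    return float(w @ x)

def grad_logit(x):
    # Analytic gradient of the logit w.r.t. the input features.
    return w

def integrated_gradients(x, baseline, steps=50):
    # IG_i = (x_i - b_i) * average of dF/dx_i along the straight path
    # from the baseline b to the input x (midpoint rule).
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_logit(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = rng.normal(size=n_features)   # one (illustrative) measurement vector
baseline = np.zeros(n_features)   # "no signal" reference point
attr = integrated_gradients(x, baseline)

# Completeness axiom: attributions sum to F(x) - F(baseline).
print(np.allclose(attr.sum(), logit(x) - logit(baseline)))  # True
```

For a real LSTM the gradient would come from autodiff (e.g. a deep-learning framework) rather than a closed form, but the path-integral structure and the completeness check are the same.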