[2602.19071] Defining Explainable AI for Requirements Analysis
Summary
This paper proposes three dimensions for defining the requirements of Explainable AI (XAI) during requirements analysis: Source, Depth, and Scope. The goal is to match an application's explanatory needs to the capabilities of the underlying AI, thereby enhancing trust in AI decision-making.
Why It Matters
As AI systems become integral to various applications, understanding how to effectively explain their decisions is crucial for user trust and acceptance. This paper addresses the need for tailored explanatory requirements based on application contexts, which is vital for advancing XAI practices.
Key Takeaways
- XAI is essential for building trust in AI systems.
- The paper categorizes explanatory requirements into Source, Depth, and Scope.
- Different applications necessitate different explanatory approaches.
- The focus is on matching ML capabilities with application needs.
- Existing literature on XAI is acknowledged but not covered in detail.
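The categorisation above can be pictured as a simple data structure: an application states its explanatory requirement along the three dimensions, and an ML technique advertises which combinations it can satisfy. The sketch below is purely illustrative; the paper names the dimensions Source, Depth, and Scope, but the enum members and the example capabilities of a decision tree are assumptions, not the paper's own definitions.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical value sets: the dimension names come from the paper,
# but these members are illustrative placeholders.
class Source(Enum):
    MODEL_INTERNALS = "model internals"   # explanation drawn from the model itself
    POST_HOC = "post hoc"                 # explanation produced by a separate process

class Depth(Enum):
    ATTRIBUTION = "attribution"           # which inputs mattered
    MECHANISTIC = "mechanistic"           # how the decision was reached

class Scope(Enum):
    LOCAL = "local"                       # a single decision
    GLOBAL = "global"                     # overall model behaviour

@dataclass(frozen=True)
class ExplanatoryRequirement:
    source: Source
    depth: Depth
    scope: Scope

def technique_satisfies(req: ExplanatoryRequirement,
                        capabilities: set) -> bool:
    """Check whether a technique's advertised explanatory
    capabilities cover an application's requirement."""
    return req in capabilities

# Example: suppose a decision tree offers model-internal,
# mechanistic explanations at both local and global scope.
tree_caps = {
    ExplanatoryRequirement(Source.MODEL_INTERNALS, Depth.MECHANISTIC, Scope.LOCAL),
    ExplanatoryRequirement(Source.MODEL_INTERNALS, Depth.MECHANISTIC, Scope.GLOBAL),
}

need = ExplanatoryRequirement(Source.MODEL_INTERNALS, Depth.MECHANISTIC, Scope.LOCAL)
print(technique_satisfies(need, tree_caps))  # True
```

Framing requirements this way makes the paper's matching problem concrete: requirements analysis fills in the three fields, and candidate ML techniques are filtered by whether they can supply that combination.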
Computer Science > Artificial Intelligence
arXiv:2602.19071 (cs)
[Submitted on 22 Feb 2026]

Title: Defining Explainable AI for Requirements Analysis
Authors: Raymond Sheh, Isaac Monteath

Abstract: Explainable Artificial Intelligence (XAI) has become popular in the last few years. The Artificial Intelligence (AI) community in general, and the Machine Learning (ML) community in particular, is coming to the realisation that in many applications, for AI to be trusted, it must not only demonstrate good performance in its decision-making, it must also explain these decisions and convince us that it is making them for the right reasons. However, different applications place different requirements on the information the underlying AI system must provide in order to convince us that it is worthy of our trust. How do we define these requirements? In this paper, we present three dimensions for categorising the explanatory requirements of different applications: Source, Depth and Scope. We focus on the problem of matching the explanatory requirements of different applications with the capabilities of the underlying ML techniques to provide them. We deliberately avoid aspects of explanation that are already well covered by the existing literature, and we focus our discussion on ML, although the principles apply to AI m...