[2602.19071] Defining Explainable AI for Requirements Analysis

Summary

This paper defines the requirements for Explainable AI (XAI) in the context of requirements analysis, focusing on the dimensions of Source, Depth, and Scope to enhance trust in AI decision-making.

Why It Matters

As AI systems become integral to various applications, understanding how to effectively explain their decisions is crucial for user trust and acceptance. This paper addresses the need for tailored explanatory requirements based on application contexts, which is vital for advancing XAI practices.

Key Takeaways

  • XAI is essential for building trust in AI systems.
  • The paper categorizes explanatory requirements into Source, Depth, and Scope.
  • Different applications necessitate different explanatory approaches.
  • The focus is on matching ML capabilities with application needs.
  • Existing literature on XAI is acknowledged but not covered in detail.

Computer Science > Artificial Intelligence
arXiv:2602.19071 (cs)
[Submitted on 22 Feb 2026]

Title: Defining Explainable AI for Requirements Analysis
Authors: Raymond Sheh, Isaac Monteath

Abstract: Explainable Artificial Intelligence (XAI) has become popular in the last few years. The Artificial Intelligence (AI) community in general, and the Machine Learning (ML) community in particular, is coming to the realisation that in many applications, for AI to be trusted, it must not only demonstrate good performance in its decision-making, but must also explain these decisions and convince us that it is making them for the right reasons. However, different applications place different requirements on the information demanded of the underlying AI system before we consider it worthy of our trust. How do we define these requirements? In this paper, we present three dimensions for categorising the explanatory requirements of different applications: Source, Depth, and Scope. We focus on the problem of matching the explanatory requirements of different applications with the capabilities of the underlying ML techniques to provide them. We deliberately avoid aspects of explanation that are already well covered by the existing literature, and we focus our discussion on ML, although the principles apply to AI m...
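The matching problem the abstract describes can be pictured as checking each application's explanatory requirement against each technique's capability along the three dimensions. The following is a minimal sketch of that idea; the specific levels within each dimension and the `satisfies` rule are illustrative assumptions, not definitions taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical levels for each of the paper's three dimensions.
class Source(Enum):
    MODEL_INTERNALS = 1   # explanation drawn from the model itself
    SURROGATE = 2         # explanation from a separate, simpler model

class Depth(Enum):
    SHALLOW = 1           # e.g. feature attributions
    DEEP = 2              # e.g. full decision traces

class Scope(Enum):
    LOCAL = 1             # explains a single decision
    GLOBAL = 2            # explains overall model behaviour

@dataclass(frozen=True)
class Requirement:
    source: Source
    depth: Depth
    scope: Scope

def satisfies(capability: Requirement, need: Requirement) -> bool:
    """Assumed matching rule: source and scope must match exactly,
    and the capability must be at least as deep as the need."""
    return (capability.source == need.source
            and capability.depth.value >= need.depth.value
            and capability.scope == need.scope)

# A hypothetical application demanding deep, local explanations
# drawn from the model's own internals.
need = Requirement(Source.MODEL_INTERNALS, Depth.DEEP, Scope.LOCAL)
shallow_cap = Requirement(Source.MODEL_INTERNALS, Depth.SHALLOW, Scope.LOCAL)
deep_cap = Requirement(Source.MODEL_INTERNALS, Depth.DEEP, Scope.LOCAL)

print(satisfies(shallow_cap, need))  # False: attribution alone is too shallow
print(satisfies(deep_cap, need))     # True: matches on all three dimensions
```

In practice the paper's point is that this check differs per application: a different domain might accept a surrogate-model explanation, which here would simply be a different `Requirement` on the capability side.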
