[2505.20085] Explanation User Interfaces: A Systematic Literature Review

Summary

This systematic literature review examines Explanation User Interfaces (XUIs) for AI systems, underscoring the importance of presenting explanations effectively to users, and distills design guidelines and a supporting platform for practitioners.

Why It Matters

As AI technology becomes more prevalent, how explanations are presented to users is crucial for transparency and trust. This review surveys existing solutions and guidelines, addressing a significant gap in the design of user interfaces that help users understand AI decisions.

Key Takeaways

  • XUIs are essential for making AI systems transparent and trustworthy.
  • The paper provides a comprehensive review of current academic literature on XUIs.
  • Design guidelines are proposed to enhance the effectiveness of user explanations.
  • A platform named HERMES is introduced to aid in the development of explainable user interfaces.
  • The research addresses a critical need for user-centered design in AI applications.

Computer Science > Human-Computer Interaction

arXiv:2505.20085 (cs) — Submitted on 26 May 2025 (v1), last revised 19 Feb 2026 (this version, v2)

Title: Explanation User Interfaces: A Systematic Literature Review

Authors: Eleonora Cappuccio (1, 2, 3), Andrea Esposito (2), Francesco Greco (2), Giuseppe Desolda (2), Rosa Lanzilotti (2), Salvatore Rinzivillo (3) — (1) Department of Computer Science, University of Pisa; (2) Department of Computer Science, University of Bari Aldo Moro; (3) ISTI CNR

Abstract: Artificial Intelligence (AI) is one of the major technological advancements of this century, bearing incredible potential for users through AI-powered applications and tools in numerous domains. Because AI models are often black boxes (i.e., their decision-making process is unintelligible), developers typically resort to eXplainable Artificial Intelligence (XAI) techniques to interpret their behaviour and to produce systems that are transparent, fair, reliable, and trustworthy. However, presenting explanations to the user is not trivial and is often left as a secondary aspect of the system's design process, leading to AI systems that are not useful to end-users. This paper presents a Systematic Literature Review on Explanation User Interfaces (XUIs) to gain a deeper understanding of the solutions and design guidelines employed...
