[2602.12748] X-SYS: A Reference Architecture for Interactive Explanation Systems
Summary
The article presents X-SYS, a reference architecture designed for interactive explanation systems in AI, addressing the challenges of deploying explainability effectively.
Why It Matters
As AI systems grow more complex, explainability becomes crucial for user trust and system usability. X-SYS provides a structured approach to integrating interactive explanations, making it easier for developers and researchers to build user-friendly AI systems that meet operational constraints.
Key Takeaways
- X-SYS organizes interactive explanation systems around four quality attributes, collectively named STAR: scalability, traceability, responsiveness, and adaptability.
- The architecture decomposes into five components (XUI Services, Explanation Services, Model Services, Data Services, and Orchestration and Governance), connecting user interfaces to backend system capabilities.
- SemanticLens serves as a practical implementation of X-SYS, demonstrating its effectiveness in managing user interactions and backend processes.
- The framework supports independent evolution of user interfaces and backend computations, enhancing system flexibility.
- This work provides a reusable blueprint for researchers and practitioners in the field of explainable AI.
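To make the five-component decomposition above concrete, here is a minimal Python sketch of how such a system might be wired together. All class names, method signatures, and the toy "explanation" logic are illustrative assumptions for this summary, not the paper's actual API: the point is only the separation of concerns, with an orchestration layer that routes requests and keeps an audit trail (traceability) while each service can evolve independently (adaptability).

```python
from dataclasses import dataclass


@dataclass
class ExplanationRequest:
    """A user-initiated query from the interactive UI (hypothetical shape)."""
    user_id: str
    model_id: str
    sample_id: str
    method: str  # e.g. a hypothetical "saliency" method key


class DataService:
    """Serves the inputs an explanation is computed for."""
    def fetch_sample(self, sample_id: str) -> list[float]:
        return [0.1, 0.2, 0.7]  # stand-in feature vector


class ModelService:
    """Wraps model access so explainers never touch the model directly."""
    def predict(self, features: list[float]) -> float:
        return max(features)  # stand-in prediction


class ExplanationService:
    """Computes an explanation for a (model, sample) pair."""
    def explain(self, model: ModelService, features: list[float]) -> dict:
        # Trivial stand-in: attribute the prediction to the largest feature.
        top = features.index(max(features))
        return {"top_feature": top, "score": model.predict(features)}


class XUIService:
    """Renders explanations for the interactive user interface."""
    def render(self, explanation: dict) -> str:
        return (f"Feature {explanation['top_feature']} dominated "
                f"(score={explanation['score']:.2f})")


class Orchestrator:
    """Orchestration and Governance: routes requests across the other
    four services and logs each query for traceability."""
    def __init__(self) -> None:
        self.data, self.model = DataService(), ModelService()
        self.explainer, self.xui = ExplanationService(), XUIService()
        self.audit_log: list[str] = []

    def handle(self, req: ExplanationRequest) -> str:
        features = self.data.fetch_sample(req.sample_id)
        explanation = self.explainer.explain(self.model, features)
        self.audit_log.append(f"{req.user_id}:{req.method}:{req.sample_id}")
        return self.xui.render(explanation)


orch = Orchestrator()
print(orch.handle(ExplanationRequest("u1", "m1", "s1", "saliency")))
```

Because the XUI talks only to the orchestrator, the frontend and the backend explanation computations can be swapped or scaled independently, which is the flexibility the takeaways above describe.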
Computer Science > Artificial Intelligence
arXiv:2602.12748 (cs)
[Submitted on 13 Feb 2026]
Title: X-SYS: A Reference Architecture for Interactive Explanation Systems
Authors: Tobias Labarta, Nhi Hoang, Maximilian Dreyer, Jim Berend, Oleg Hein, Jackie Ma, Wojciech Samek, Sebastian Lapuschkin
Abstract: The explainable AI (XAI) research community has proposed numerous technical methods, yet deploying explainability as systems remains challenging: Interactive explanation systems require both suitable algorithms and system capabilities that maintain explanation usability across repeated queries, evolving models and data, and governance constraints. We argue that operationalizing XAI requires treating explainability as an information systems problem where user interaction demands induce specific system requirements. We introduce X-SYS, a reference architecture for interactive explanation systems that guides (X)AI researchers, developers and practitioners in connecting interactive explanation user interfaces (XUI) with system capabilities. X-SYS organizes around four quality attributes named STAR (scalability, traceability, responsiveness, and adaptability), and specifies a five-component decomposition (XUI Services, Explanation Services, Model Services, Data Services, Orchestration and Governance). It maps interacti...