[2602.19620] Rules or Weights? Comparing User Understanding of Explainable AI Techniques with the Cognitive XAI-Adaptive Model
Summary
This article summarizes a study of how users understand explainable AI (XAI) techniques, comparing rule-based and weight-based explanations through the Cognitive XAI-Adaptive Model (CoXAM) to improve interpretability and alignment with human decision-making.
Why It Matters
As AI systems become more prevalent, understanding how users interpret AI decisions is crucial for trust and usability. This research provides a cognitive framework to evaluate XAI techniques, potentially improving user interactions and decision-making processes in AI applications.
Key Takeaways
- CoXAM offers a cognitive framework for comparing XAI techniques.
- User studies reveal distinct reasoning strategies for interpreting AI decisions.
- Counterfactual tasks are generally more challenging than forward tasks.
- Decision tree rules are harder to recall than linear weights.
- The effectiveness of XAI is context-dependent, influenced by application data.
Computer Science > Artificial Intelligence
arXiv:2602.19620 (cs) [Submitted on 23 Feb 2026]
Title: Rules or Weights? Comparing User Understanding of Explainable AI Techniques with the Cognitive XAI-Adaptive Model
Authors: Louth Bin Rawshan, Zhuoyu Wang, Brian Y Lim
Abstract: Rules and weights are popular XAI techniques for explaining AI decisions. Yet it remains unclear how to choose between them, as there has been no cognitive framework for comparing their interpretability. In an elicitation user study on forward and counterfactual decision tasks, we identified 7 reasoning strategies for interpreting three XAI schemas: weights, rules, and their hybrid. To analyze their capabilities, we propose CoXAM, a Cognitive XAI-Adaptive Model with a shared memory representation that encodes instance attributes, linear weights, and decision rules. CoXAM employs computational rationality to choose among reasoning processes based on the trade-off between utility and reasoning time, separately for forward and counterfactual decision tasks. In a validation study, CoXAM demonstrated stronger alignment with human decision-making than baseline machine learning proxy models. The model successfully replicated and explained several key empirical findings, including that counterfactual tasks are inher...
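The abstract's core mechanism, choosing a reasoning process by trading off utility against reasoning time, can be illustrated with a minimal sketch. This is not the paper's implementation; the strategy names, utility values, time estimates, and the `time_cost` weight are all hypothetical placeholders for how a computationally rational choice might look:

```python
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    utility: float  # expected task utility (hypothetical scale, 0-1)
    time: float     # expected reasoning time (hypothetical, seconds)

def choose_strategy(strategies, time_cost=0.5):
    """Computational rationality as a sketch: pick the strategy
    maximizing utility minus a time penalty. `time_cost` is an
    assumed trade-off weight, not a value from the paper."""
    return max(strategies, key=lambda s: s.utility - time_cost * s.time)

# Illustrative candidates mirroring the three XAI schemas.
candidates = [
    Strategy("weights", utility=0.90, time=4.0),
    Strategy("rules",   utility=0.80, time=2.0),
    Strategy("hybrid",  utility=0.95, time=5.0),
]
print(choose_strategy(candidates).name)  # -> rules
```

With these made-up numbers, the faster rule-based strategy wins despite its lower raw utility; raising its reasoning time or lowering `time_cost` would shift the choice, which is the kind of context dependence the takeaways describe.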