[2602.19620] Rules or Weights? Comparing User Understanding of Explainable AI Techniques with the Cognitive XAI-Adaptive Model

arXiv - AI · 4 min read

Summary

This article examines how users understand explainable AI (XAI) techniques, comparing rule-based and weight-based explanations via the Cognitive XAI-Adaptive Model (CoXAM), a framework for assessing their interpretability and alignment with human decision-making.

Why It Matters

As AI systems become more prevalent, understanding how users interpret AI decisions is crucial for trust and usability. This research provides a cognitive framework to evaluate XAI techniques, potentially improving user interactions and decision-making processes in AI applications.

Key Takeaways

  • CoXAM offers a cognitive framework for comparing XAI techniques.
  • User studies reveal distinct reasoning strategies for interpreting AI decisions.
  • Counterfactual tasks are generally more challenging than forward tasks.
  • Decision tree rules are harder to recall than linear weights.
  • The effectiveness of XAI is context-dependent, influenced by application data.
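To make the weights-versus-rules contrast concrete, here is a minimal illustration of the two explanation schemas the study compares: the same decision shown as signed linear weights and as an IF-THEN decision path. All names and numbers are our own hypothetical example, not values from the paper.

```python
# Hypothetical example (ours, not the paper's): one credit decision
# explained under the two XAI schemas -- linear Weights vs. decision Rules.

# An applicant, with features pre-normalized to [0, 1] for the linear model.
x = {"income": 0.52, "debt_ratio": 0.45, "age": 0.31}

# Weights schema: signed feature weights plus a bias; approve if score > 0.
weights = {"income": 0.6, "debt_ratio": -1.2, "age": 0.1}
bias = 0.2
score = bias + sum(weights[f] * x[f] for f in weights)
weights_decision = "approve" if score > 0 else "deny"

# Rules schema: a comparable policy read as an IF-THEN decision path.
def rules_decision(applicant):
    if applicant["debt_ratio"] > 0.5:
        return "deny"
    return "approve" if applicant["income"] >= 0.4 else "deny"

print(weights_decision, rules_decision(x))  # both give "approve" here
```

Recalling the weights explanation requires holding a few signed numbers in memory, while the rules explanation requires tracing an ordered path of conditions, which is one intuition behind the recall finding above.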

Computer Science > Artificial Intelligence

arXiv:2602.19620 (cs) · Submitted on 23 Feb 2026

Title: Rules or Weights? Comparing User Understanding of Explainable AI Techniques with the Cognitive XAI-Adaptive Model

Authors: Louth Bin Rawshan, Zhuoyu Wang, Brian Y Lim

Abstract: Rules and Weights are popular XAI techniques for explaining AI decisions. Yet, it remains unclear how to choose between them, as there is no cognitive framework for comparing their interpretability. In an elicitation user study on forward and counterfactual decision tasks, we identified 7 reasoning strategies for interpreting three XAI schemas: weights, rules, and their hybrid. To analyze their capabilities, we propose CoXAM, a Cognitive XAI-Adaptive Model with a shared memory representation that encodes instance attributes, linear weights, and decision rules. CoXAM employs computational rationality to choose among reasoning processes based on the trade-off between utility and reasoning time, separately for forward and counterfactual decision tasks. In a validation study, CoXAM demonstrated stronger alignment with human decision-making than baseline machine learning proxy models. The model successfully replicated and explained several key empirical findings, including that counterfactual tasks are inher...
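The abstract's core mechanism, computational rationality, can be sketched as picking the reasoning strategy whose expected utility best trades off against its reasoning time. The strategy names, utilities, times, and the linear cost model below are illustrative assumptions of ours, not parameters from CoXAM.

```python
# Hypothetical sketch of computational-rationality strategy selection:
# maximize utility minus a time cost. Names and numbers are illustrative
# assumptions, not values from the paper.

def choose_strategy(strategies, time_cost=0.5):
    """Return the strategy maximizing utility - time_cost * reasoning time."""
    return max(strategies, key=lambda s: s["utility"] - time_cost * s["time"])

strategies = [
    {"name": "scan-weights",    "utility": 0.8, "time": 1.0},
    {"name": "trace-rule-path", "utility": 0.9, "time": 2.5},
    {"name": "guess-prior",     "utility": 0.3, "time": 0.2},
]

best = choose_strategy(strategies)
print(best["name"])  # "scan-weights": highest utility net of time cost
```

Under this framing, the same agent can rationally switch strategies between forward and counterfactual tasks simply because the utilities and reasoning times of each strategy differ across task types.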
