[2506.13259] An Explainable and Interpretable Composite Indicator Based on Decision Rules
Computer Science > Machine Learning
arXiv:2506.13259 (cs)
[Submitted on 16 Jun 2025 (v1), last revised 3 Mar 2026 (this version, v2)]

Title: An Explainable and Interpretable Composite Indicator Based on Decision Rules
Authors: Salvatore Corrente, Salvatore Greco, Roman Słowiński, Silvano Zappalà

Abstract: Composite indicators are widely used to score or classify units evaluated on multiple criteria. Their construction typically involves aggregating criteria evaluations, a common practice in Multiple Criteria Decision Aiding (MCDA). Beyond producing a final score or classification, however, ensuring explainability, interpretability, and transparency is crucial. This paper proposes a novel framework for constructing explainable and interpretable composite indicators using if-then decision rules. We explore four scenarios: (i) decision rules explaining classifications derived from the sum of ordinal indicator codes; (ii) interpretation of an opaque numerical composite indicator used to classify units into quantiles; (iii) construction of a composite indicator from decision-maker preference information, given as classifications of reference units; and (iv) explanation of classifications generated by an existing MCDA method. To induce the rules from scored or classified units, we apply the Dominance-based Rough Set Approach...
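To make the if-then rule idea concrete, the following is a minimal sketch of applying DRSA-style "at least" decision rules to classify a unit evaluated on multiple criteria. The criteria names, thresholds, rules, and the `classify` helper are illustrative assumptions for exposition, not the paper's actual data or method.

```python
# Hypothetical sketch: DRSA-style "at least" rules have the form
# "if evaluation on criterion c1 >= t1 and ... then unit belongs to
# at least class k". The rules and criteria below are invented examples.

def classify(unit, rules, default_class=1):
    """Assign the highest class among all satisfied 'at least' rules."""
    assigned = default_class
    for conditions, at_least_class in rules:
        # A rule fires when the unit meets every criterion threshold.
        if all(unit[crit] >= thr for crit, thr in conditions.items()):
            assigned = max(assigned, at_least_class)
    return assigned

# Illustrative rule base: ({criterion: minimal evaluation}, class).
rules = [
    ({"health": 3, "education": 2}, 2),  # => at least class 2 (Medium)
    ({"health": 4, "income": 4}, 3),     # => at least class 3 (High)
]

unit = {"health": 4, "education": 3, "income": 4}
print(classify(unit, rules))  # both rules fire; the stronger one wins -> 3
```

Because each assignment is traced back to the specific rules that fired, a classification produced this way is directly explainable to the decision maker.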