[2602.21746] fEDM+: A Risk-Based Fuzzy Ethical Decision Making Framework with Principle-Level Explainability and Pluralistic Validation
Summary
The paper presents fEDM+, an enhanced fuzzy ethical decision-making framework that improves explainability through principle-level traceability and strengthens validation by integrating multiple stakeholder perspectives.
Why It Matters
As AI systems increasingly influence ethical decisions, frameworks like fEDM+ are crucial for ensuring transparency and accountability. This model addresses the need for principled explainability and robustness in ethical AI, making it relevant for developers and policymakers in AI ethics.
Key Takeaways
- fEDM+ enhances the original fuzzy Ethical Decision-Making framework by adding an Explainability and Traceability Module.
- The framework allows for principled disagreement by validating decisions against multiple stakeholder referents.
- It maintains formal verifiability while improving interpretability and contextual sensitivity in ethical AI systems.
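The weighted principle-contribution profile mentioned above can be illustrated with a small sketch. This is a hypothetical reconstruction, not the paper's actual formalism: it assumes each fired decision rule carries an activation strength and a set of principle weights, and that the ETM aggregates and normalizes these into a per-action profile. All rule names, principle names, and weights below are illustrative.

```python
# Hypothetical sketch of a principle-contribution profile in the spirit of
# the ETM: each fired rule is linked to moral principles with weights, and
# contributions are aggregated, then normalized, per recommended action.
from collections import defaultdict

# Illustrative rule base: rule id -> (activation strength in [0, 1],
# {principle: weight}). Names and numbers are NOT from the paper.
fired_rules = {
    "R1": (0.8, {"non-maleficence": 0.7, "autonomy": 0.3}),
    "R2": (0.5, {"beneficence": 1.0}),
    "R3": (0.6, {"non-maleficence": 0.4, "justice": 0.6}),
}

def contribution_profile(rules):
    """Aggregate weighted principle contributions over all fired rules,
    then normalize so the contributions sum to 1."""
    raw = defaultdict(float)
    for strength, principles in rules.values():
        for principle, weight in principles.items():
            raw[principle] += strength * weight
    total = sum(raw.values())
    return {p: c / total for p, c in raw.items()}

profile = contribution_profile(fired_rules)
# The profile exposes not only which action was recommended but which
# principles drove it, e.g. non-maleficence dominates in this toy case.
```

A profile like this is what makes an explanation auditable: a reviewer can trace a recommendation back to the principles that carried the most weight.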
Computer Science > Artificial Intelligence
arXiv:2602.21746 (cs)
Submitted on 25 Feb 2026
Authors: Abeer Dyoub, Francesca A. Lisi
Abstract: In a previous work, we introduced the fuzzy Ethical Decision-Making framework (fEDM), a risk-based ethical reasoning architecture grounded in fuzzy logic. The original model combined a fuzzy Ethical Risk Assessment module (fERA) with ethical decision rules, enabled formal structural verification through Fuzzy Petri Nets (FPNs), and validated outputs against a single normative referent. Although this approach ensured formal soundness and decision consistency, it did not fully address two critical challenges: principled explainability of decisions and robustness under ethical pluralism. In this paper, we extend fEDM in two major directions. First, we introduce an Explainability and Traceability Module (ETM) that explicitly links each ethical decision rule to the underlying moral principles and computes a weighted principle-contribution profile for every recommended action. This enables transparent, auditable explanations that expose not only what decision was made but why, and on the basis of whi...
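The abstract's second direction, validating decisions against multiple stakeholder referents rather than a single one, can be sketched as follows. This is a hypothetical illustration under assumed names: it treats each referent as issuing a verdict on the recommended action and reports where principled disagreement arises, which is the pluralism the paper says the original single-referent validation could not capture.

```python
# Hypothetical sketch of pluralistic validation: instead of checking the
# framework's recommendation against one normative referent, compare it
# against several stakeholder referents and record which ones disagree.
# Referent names and verdicts are illustrative, NOT from the paper.

def validate(recommendation, referents):
    """Return the agreement rate and the list of disagreeing referents."""
    disagreeing = [name for name, verdict in referents.items()
                   if verdict != recommendation]
    rate = 1 - len(disagreeing) / len(referents)
    return rate, disagreeing

referents = {
    "clinical-ethics-board": "defer",
    "patient-advocacy": "proceed",
    "regulatory": "defer",
}

rate, disagreeing = validate("defer", referents)
# Two of three referents agree; "patient-advocacy" registers a principled
# disagreement that is surfaced rather than averaged away.
```

Surfacing disagreement explicitly, instead of collapsing referents into a single score, is what lets the framework remain robust under ethical pluralism.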