[2602.21746] fEDM+: A Risk-Based Fuzzy Ethical Decision Making Framework with Principle-Level Explainability and Pluralistic Validation


arXiv - AI · 4 min read

Summary

The paper presents fEDM+, an enhanced fuzzy ethical decision-making framework that improves explainability and validation by integrating multiple stakeholder perspectives.

Why It Matters

As AI systems increasingly influence ethical decisions, frameworks like fEDM+ are crucial for ensuring transparency and accountability. This model addresses the need for principled explainability and robustness in ethical AI, making it relevant for developers and policymakers in AI ethics.

Key Takeaways

  • fEDM+ enhances the original fuzzy Ethical Decision-Making framework by adding an Explainability and Traceability Module.
  • The framework allows for principled disagreement by validating decisions against multiple stakeholder referents.
  • It maintains formal verifiability while improving interpretability and contextual sensitivity in ethical AI systems.
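The pluralistic validation described above can be illustrated with a minimal sketch: a fuzzy decision score is checked against each stakeholder referent's reference score rather than a single normative one, and per-referent agreement (or principled disagreement) is reported. The stakeholder names, scores, and tolerance below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: validating one fuzzy decision score against
# several stakeholder referents instead of a single normative one.
# Referent names, reference scores, and the tolerance are illustrative.

def validate_pluralistic(decision_score, referents, tolerance=0.15):
    """Compare a decision score in [0, 1] against each stakeholder's
    reference score; True means agreement within tolerance, False
    flags a principled disagreement with that referent."""
    return {
        stakeholder: abs(decision_score - reference) <= tolerance
        for stakeholder, reference in referents.items()
    }

referents = {
    "deontological": 0.80,
    "consequentialist": 0.50,
    "care_ethics": 0.75,
}
print(validate_pluralistic(0.72, referents))
# → {'deontological': True, 'consequentialist': False, 'care_ethics': True}
```

Reporting per-referent agreement, rather than collapsing to one verdict, is what lets the framework surface disagreement instead of hiding it.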

Computer Science > Artificial Intelligence
arXiv:2602.21746 (cs) [Submitted on 25 Feb 2026]

Title: fEDM+: A Risk-Based Fuzzy Ethical Decision Making Framework with Principle-Level Explainability and Pluralistic Validation
Authors: Abeer Dyoub, Francesca A. Lisi

Abstract: In a previous work, we introduced the fuzzy Ethical Decision-Making framework (fEDM), a risk-based ethical reasoning architecture grounded in fuzzy logic. The original model combined a fuzzy Ethical Risk Assessment module (fERA) with ethical decision rules, enabled formal structural verification through Fuzzy Petri Nets (FPNs), and validated outputs against a single normative referent. Although this approach ensured formal soundness and decision consistency, it did not fully address two critical challenges: principled explainability of decisions and robustness under ethical pluralism. In this paper, we extend fEDM in two major directions. First, we introduce an Explainability and Traceability Module (ETM) that explicitly links each ethical decision rule to the underlying moral principles and computes a weighted principle-contribution profile for every recommended action. This enables transparent, auditable explanations that expose not only what decision was made but why, and on the basis of whi...
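The weighted principle-contribution profile the abstract describes can be sketched as follows: each fired decision rule is linked to one or more moral principles with weights, each principle accumulates firing strength times weight, and the result is normalized into a profile. The rule names, principles, and weights below are hypothetical placeholders, not the paper's actual rule base.

```python
# Hypothetical sketch of a weighted principle-contribution profile.
# fired_rules maps each fired rule to its fuzzy firing strength in [0, 1];
# rule_principles maps each rule to its linked principles and weights.
# All names and numbers here are illustrative assumptions.

def contribution_profile(fired_rules, rule_principles):
    """Aggregate firing strength × principle weight per principle,
    then normalize so contributions sum to 1."""
    profile = {}
    for rule, strength in fired_rules.items():
        for principle, weight in rule_principles[rule].items():
            profile[principle] = profile.get(principle, 0.0) + strength * weight
    total = sum(profile.values())
    return {p: v / total for p, v in profile.items()} if total else profile

fired = {"R1": 0.9, "R2": 0.4}
links = {
    "R1": {"non_maleficence": 0.7, "autonomy": 0.3},
    "R2": {"beneficence": 1.0},
}
print(contribution_profile(fired, links))
```

A profile like this makes the explanation auditable: the dominant principle behind a recommendation can be read directly off the normalized weights.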

