[2510.20052] Endogenous Aggregation of Multiple Data Envelopment Analysis Scores for Large Data Sets


arXiv - Machine Learning 4 min read


Mathematics > Optimization and Control — arXiv:2510.20052 (math)

[Submitted on 22 Oct 2025 (v1), last revised 4 Apr 2026 (this version, v2)]

Title: Endogenous Aggregation of Multiple Data Envelopment Analysis Scores for Large Data Sets

Authors: Hashem Omrani, Raha Imanirad, Adam Diamant, Utkarsh Verma, Amol Verma, Fahad Razak

Abstract: We propose an approach for dynamic efficiency evaluation across multiple organizational dimensions using data envelopment analysis (DEA). The method generates both dimension-specific and aggregate efficiency scores, incorporates desirable and undesirable outputs, and is suitable for large-scale problem settings. Two regularized DEA models are introduced: a slack-based measure (SBM) and a linearized version of a nonlinear goal programming model (GP-SBM). While SBM estimates an aggregate efficiency score and then distributes it across dimensions, GP-SBM first estimates dimension-level efficiencies and then derives an aggregate score. Both models utilize a regularization parameter to enhance discriminatory power while also directly integrating both desirable and undesirable outputs. We demonstrate the computational efficiency and validity of our approach on multiple datasets and apply it to a case study of twelve hospitals in Ontario, Canada, evaluating three theoretically grounded dime...
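For context, the abstract builds on the classical slack-based measure. The sketch below is a minimal implementation of a standard SBM model (Tone's formulation, linearized via the Charnes-Cooper transformation), not the paper's regularized SBM or GP-SBM models; undesirable outputs, the regularization parameter, and the multi-dimension aggregation are all omitted, and the toy data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def sbm_efficiency(X, Y, o):
    """SBM efficiency of DMU `o` under constant returns to scale.

    X: (m, n) input matrix, Y: (s, n) desirable-output matrix, columns = DMUs.
    The fractional SBM program is linearized with the Charnes-Cooper
    transformation; the variable vector is [t, Lambda (n), S_minus (m), S_plus (s)].
    """
    m, n = X.shape
    s, _ = Y.shape
    xo, yo = X[:, o], Y[:, o]
    nvar = 1 + n + m + s

    # Objective: minimize tau = t - (1/m) * sum_i S_i^- / x_io
    c = np.zeros(nvar)
    c[0] = 1.0
    c[1 + n:1 + n + m] = -1.0 / (m * xo)

    A_eq = np.zeros((1 + m + s, nvar))
    b_eq = np.zeros(1 + m + s)
    # Normalization: t + (1/s) * sum_r S_r^+ / y_ro = 1
    A_eq[0, 0] = 1.0
    A_eq[0, 1 + n + m:] = 1.0 / (s * yo)
    b_eq[0] = 1.0
    # Input balance: x_io * t - sum_j x_ij * Lambda_j - S_i^- = 0
    for i in range(m):
        A_eq[1 + i, 0] = xo[i]
        A_eq[1 + i, 1:1 + n] = -X[i]
        A_eq[1 + i, 1 + n + i] = -1.0
    # Output balance: y_ro * t - sum_j y_rj * Lambda_j + S_r^+ = 0
    for r in range(s):
        A_eq[1 + m + r, 0] = yo[r]
        A_eq[1 + m + r, 1:1 + n] = -Y[r]
        A_eq[1 + m + r, 1 + n + m + r] = 1.0

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * nvar)
    return res.fun  # rho* in (0, 1]; 1.0 means SBM-efficient

# Toy example (made up): 3 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 4.0, 8.0],
              [3.0, 1.0, 1.0]])
Y = np.array([[1.0, 1.0, 1.0]])
scores = [sbm_efficiency(X, Y, j) for j in range(3)]
```

In this toy data the third DMU is dominated by the second (same output, more of the first input), so it scores below 1 while the other two sit on the frontier; the paper's contribution is to regularize and extend this kind of model so that it also handles undesirable outputs and produces consistent dimension-level and aggregate scores at scale.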

Originally published on April 07, 2026. Curated by AI News.

