[2603.01630] SEED-SET: Scalable Evolving Experimental Design for System-level Ethical Testing
Computer Science > Artificial Intelligence
arXiv:2603.01630 (cs)
[Submitted on 2 Mar 2026]

Title: SEED-SET: Scalable Evolving Experimental Design for System-level Ethical Testing
Authors: Anjali Parashar, Yingke Li, Eric Yang Yu, Fei Chen, James Neidhoefer, Devesh Upadhyay, Chuchu Fan

Abstract: As autonomous systems such as drones become increasingly deployed in high-stakes, human-centric domains, evaluating their ethical alignment is critical: failure to do so poses imminent danger to human lives and risks long-term bias in decision-making. Automated ethical benchmarking of these systems remains understudied, owing to the lack of ubiquitous, well-defined evaluation metrics and to stakeholder-specific subjectivity that cannot be modeled analytically. To address these challenges, we propose SEED-SET, a Bayesian experimental design framework that incorporates both domain-specific objective evaluations and subjective value judgments from stakeholders. SEED-SET models the two evaluation types separately with hierarchical Gaussian Processes and uses a novel acquisition strategy to propose interesting test candidates based on learned qualitative preferences and objectives aligned with stakeholder preferences. We validate our approach for ethical benchmarking of autonomous agents on two applications and ...
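The abstract's core idea (separate Gaussian Process models for objective scores and subjective stakeholder ratings, plus an acquisition rule that picks the next test scenario) can be illustrated with a toy sketch. This is purely illustrative: the plain GP regressors, the synthetic data, and the uncertainty-weighted acquisition below are assumptions for the sake of a runnable example, not the paper's hierarchical GPs or its actual acquisition strategy.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

class GP:
    """Minimal zero-mean GP regressor with a fixed RBF kernel (illustrative)."""
    def __init__(self, noise=1e-3):
        self.noise = noise

    def fit(self, X, y):
        self.X, self.y = X, y
        K = rbf_kernel(X, X) + self.noise * np.eye(len(X))
        self.K_inv = np.linalg.inv(K)
        return self

    def predict(self, Xs):
        Ks = rbf_kernel(Xs, self.X)
        mu = Ks @ self.K_inv @ self.y
        # Posterior variance: prior variance (1.0) minus explained variance.
        var = 1.0 - np.einsum("ij,jk,ik->i", Ks, self.K_inv, Ks)
        return mu, np.maximum(var, 0.0)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 2))       # scenarios already evaluated
y_obj = np.sin(3 * X[:, 0])              # placeholder objective scores
y_subj = X[:, 1] ** 2                    # placeholder stakeholder ratings

# Model the two evaluation types with separate GPs, as the abstract describes.
gp_obj = GP().fit(X, y_obj)
gp_subj = GP().fit(X, y_subj)

# Toy acquisition: favor candidates where either model is still uncertain.
cand = rng.uniform(0, 1, size=(200, 2))  # candidate test scenarios
_, v_obj = gp_obj.predict(cand)
_, v_subj = gp_subj.predict(cand)
score = 0.5 * v_obj + 0.5 * v_subj
best = cand[np.argmax(score)]
print(best)
```

In this sketch the next test candidate is simply the one with the highest combined posterior variance; the paper's actual strategy additionally accounts for learned qualitative preferences, which a pure uncertainty criterion does not capture.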