[2602.17990] WorkflowPerturb: Calibrated Stress Tests for Evaluating Multi-Agent Workflow Metrics
Summary
The paper introduces WorkflowPerturb, a benchmark that stress-tests multi-agent workflow evaluation metrics with calibrated, controlled perturbations, addressing the difficulty that automatic workflow metrics are often poorly calibrated.
Why It Matters
As LLM-based systems increasingly automate complex tasks, ensuring the reliability of workflow evaluations becomes crucial. WorkflowPerturb provides a systematic approach to assess and improve the calibration of evaluation metrics, enhancing the robustness of AI applications in multi-agent environments.
Key Takeaways
- WorkflowPerturb offers a controlled benchmark for workflow evaluation metrics.
- It includes 4,973 golden workflows and 44,757 perturbed variants to test metric sensitivity.
- The study identifies systematic sensitivity and calibration differences across metric families.
- Results support a severity-aware interpretation of workflow evaluation scores.
- The dataset will be publicly available upon acceptance, promoting further research.
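The severity-aware interpretation above can be illustrated with a small sketch. Assuming, purely hypothetically, a linear expected score trajectory in which a calibrated metric at severity s scores roughly base − s, the residual between observed and expected scores quantifies miscalibration. The function name, the linear trajectory, and the example scores are illustrative assumptions, not definitions from the paper.

```python
def calibration_residuals(scores_by_severity, base_score=1.0):
    """Hypothetical calibration check: compare observed metric scores
    against a linear expected trajectory (expected = base_score - severity).
    A positive residual means the metric under-penalizes the perturbation;
    a negative residual means it over-penalizes it."""
    return {s: obs - (base_score - s) for s, obs in scores_by_severity.items()}

# Illustrative observed scores at the paper's three severity levels.
observed = {0.10: 0.95, 0.30: 0.80, 0.50: 0.45}
resid = calibration_residuals(observed)
```

Under this toy trajectory, the metric under-penalizes mild and moderate perturbations (positive residuals at 10% and 30%) and over-penalizes severe ones (negative residual at 50%), which is the kind of pattern a severity-aware reading of scores would surface.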
Computer Science > Artificial Intelligence
arXiv:2602.17990 (cs) [Submitted on 20 Feb 2026]
Title: WorkflowPerturb: Calibrated Stress Tests for Evaluating Multi-Agent Workflow Metrics
Authors: Madhav Kanda, Pedro Las-Casas, Alok Gautam Kumbhare, Rodrigo Fonseca, Sharad Agarwal
Abstract: LLM-based systems increasingly generate structured workflows for complex tasks. In practice, automatic evaluation of these workflows is difficult, because metric scores are often not calibrated, and score changes do not directly communicate the severity of workflow degradation. We introduce WorkflowPerturb, a controlled benchmark for studying workflow evaluation metrics. It works by applying realistic, controlled perturbations to golden workflows. WorkflowPerturb contains 4,973 golden workflows and 44,757 perturbed variants across three perturbation types (Missing Steps, Compressed Steps, and Description Changes), each applied at severity levels of 10%, 30%, and 50%. We benchmark multiple metric families and analyze their sensitivity and calibration using expected score trajectories and residuals. Our results characterize systematic differences across metric families and support severity-aware interpretation of workflow evaluation scores. Our dataset will be released upon acceptance.
Subjects: Artificial Intelligence
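To make the benchmark's construction concrete, here is a minimal sketch of a "Missing Steps"-style perturbation, assuming a workflow is a flat list of step descriptions and severity is the fraction of steps dropped. All names and the dropping rule are hypothetical; the paper does not specify its perturbation implementation.

```python
import math
import random

def perturb_missing_steps(workflow, severity, rng=None):
    """Illustrative 'Missing Steps' perturbation: drop a severity-
    proportional fraction of steps from a golden workflow.
    `workflow` is a list of step descriptions; `severity` is e.g.
    0.10, 0.30, or 0.50, matching the paper's three levels."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    n_drop = max(1, math.floor(severity * len(workflow)))
    dropped = set(rng.sample(range(len(workflow)), n_drop))
    return [step for i, step in enumerate(workflow) if i not in dropped]

golden = [f"step-{i}" for i in range(10)]
variants = [perturb_missing_steps(golden, s) for s in (0.10, 0.30, 0.50)]
```

Note the dataset arithmetic is consistent with one variant per (type, severity) pair: three perturbation types times three severity levels gives 9 variants per golden workflow, and 4,973 × 9 = 44,757.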