[2602.07150] On Randomness in Agentic Evals
Computer Science > Machine Learning

arXiv:2602.07150 (cs)

[Submitted on 6 Feb 2026 (v1), last revised 23 Mar 2026 (this version, v2)]

Title: On Randomness in Agentic Evals

Authors: Bjarni Haukur Bjarnason, André Silva, Martin Monperrus

Abstract: Agentic systems are evaluated on benchmarks where agents interact with environments to solve tasks. Most papers report a pass@1 score computed from a single run per task, assuming this gives a reliable performance estimate. We test this assumption by collecting 60,000 agentic trajectories on SWE-Bench-Verified, spanning three models and two scaffolds. We find substantial variance: single-run pass@1 estimates vary by 2.2 to 6.0 percentage points depending on which run is selected, with standard deviations exceeding 1.5 percentage points even at temperature 0. This variance has critical implications: reported improvements of 2--3 percentage points may reflect evaluation noise rather than genuine algorithmic progress. Through token-level analysis, we show that trajectories diverge early, often within the first few percent of tokens, and that these small differences cascade into different solution strategies. To enable reliable evaluation of agentic systems, we recommend three concrete practices: (1) estimate pass@1 from multiple independent runs per task, especially when measuring small improvements, ...
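As a minimal sketch of the abstract's first recommendation (multi-run pass@1 estimation), the snippet below averages per-task success rates over n independent runs and reports the spread of the pass@1 values one would have gotten from any single run. The data here is synthetic and purely illustrative; the paper's actual trajectories, tasks, and outcome counts are not reproduced.

```python
import random
import statistics

def pass_at_1(results_per_task):
    """Mean per-task success rate over n independent runs.

    results_per_task: list of lists of 0/1 outcomes, one inner list per task.
    """
    return statistics.mean(sum(runs) / len(runs) for runs in results_per_task)

def single_run_spread(results_per_task, n_runs):
    """pass@1 that would have been reported had only run j been executed,
    for each j; returns (min, max, population std dev) across runs."""
    scores = [
        statistics.mean(runs[j] for runs in results_per_task)
        for j in range(n_runs)
    ]
    return min(scores), max(scores), statistics.pstdev(scores)

# Hypothetical data: 500 tasks, 10 runs each, with per-task solve
# probabilities drawn from a Beta(2, 3) to mimic task heterogeneity.
random.seed(0)
N_TASKS, N_RUNS = 500, 10
results = [
    [int(random.random() < p) for _ in range(N_RUNS)]
    for p in (random.betavariate(2, 3) for _ in range(N_TASKS))
]

lo, hi, sd = single_run_spread(results, N_RUNS)
print(f"multi-run pass@1: {pass_at_1(results):.3f}")
print(f"single-run estimates range {lo:.3f} to {hi:.3f} (sd {sd:.3f})")
```

Even on this toy data, single-run estimates scatter around the multi-run mean, which is the kind of run-selection variance the abstract quantifies at 2.2 to 6.0 percentage points on SWE-Bench-Verified.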