[2603.05399] Judge Reliability Harness: Stress Testing the Reliability of LLM Judges
Computer Science > Artificial Intelligence
arXiv:2603.05399 (cs)
[Submitted on 5 Mar 2026]

Title: Judge Reliability Harness: Stress Testing the Reliability of LLM Judges
Authors: Sunishchal Dev, Andrew Sloan, Joshua Kavner, Nicholas Kong, Morgan Sandler

Abstract: We present the Judge Reliability Harness, an open-source library for constructing validation suites that test the reliability of LLM judges. As LLM-based scoring is widely deployed in AI benchmarks, more tooling is needed to efficiently assess the reliability of these methods. Given a benchmark dataset and an LLM judge configuration, the harness generates reliability tests that evaluate both binary judgment accuracy and ordinal grading performance for free-response and agentic task formats. We evaluate four state-of-the-art judges across four benchmarks spanning safety, persuasion, misuse, and agentic behavior, and find meaningful variation in performance across models and perturbation types, highlighting opportunities to improve the robustness of LLM judges. No judge that we evaluated is uniformly reliable across benchmarks under our harness. For example, our preliminary experiments revealed judge consistency issues, as measured by accuracy in judging another LLM's ability to complete a task, arising from simple text formatting changes, paraphrasing, c...
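To make the workflow described in the abstract concrete, the following is a minimal, self-contained sketch of the underlying idea: apply meaning-preserving perturbations (such as text formatting changes) to responses and measure how often a judge's verdict stays the same. All names here (toy_judge, consistency_rate, the perturbation functions) are hypothetical illustrations, not the library's actual API.

```python
from typing import Callable

# Hypothetical stand-in for an LLM judge: maps (prompt, response) to a
# binary pass/fail verdict. In practice this would call a real judge model.
def toy_judge(prompt: str, response: str) -> bool:
    return "step" in response.lower() and len(response) > 40

# Simple surface-level perturbations of the kind the abstract mentions
# (text formatting changes); these should not change a response's meaning.
def add_markdown_bullets(text: str) -> str:
    return "\n".join(f"- {line}" for line in text.splitlines())

def collapse_whitespace(text: str) -> str:
    return " ".join(text.split())

PERTURBATIONS: list[Callable[[str], str]] = [add_markdown_bullets, collapse_whitespace]

def consistency_rate(judge: Callable[[str, str], bool],
                     items: list[tuple[str, str]]) -> float:
    """Fraction of (item, perturbation) pairs where the verdict is unchanged."""
    agreements, total = 0, 0
    for prompt, response in items:
        baseline = judge(prompt, response)
        for perturb in PERTURBATIONS:
            total += 1
            agreements += judge(prompt, perturb(response)) == baseline
    return agreements / total if total else 1.0

if __name__ == "__main__":
    items = [
        ("Explain how to sort a list.",
         "Step 1: compare adjacent items.\nStep 2: swap if out of order."),
        ("What is 2 + 2?", "4"),
    ]
    print(f"Verdict consistency under perturbation: {consistency_rate(toy_judge, items):.2f}")
```

A reliability harness of the kind the paper describes would replace the toy judge with a configured LLM judge, draw items from a benchmark dataset, and report consistency (and accuracy against ground truth) broken down by perturbation type.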