[2603.25780] A Judge Agent Closes the Reliability Gap in AI-Generated Scientific Simulation
Computer Science > Software Engineering
arXiv:2603.25780 (cs)
[Submitted on 26 Mar 2026]

Title: A Judge Agent Closes the Reliability Gap in AI-Generated Scientific Simulation
Authors: Chengshuai Yang

Abstract: Large language models can generate scientific simulation code, but the generated code silently fails on most non-textbook problems. We show that classical mathematical validation -- well-posedness, convergence, and error certification -- can be fully automated by a Judge Agent, reducing the silent-failure rate from 42% to 1.5% across 134 test cases spanning 12 scientific domains. The headline result comes from a prospective benchmark: 72 blinded tasks submitted by 12 independent scientists yield an 89% success rate (95% CI: [80%, 95%]) with automated error bounds, versus 53% without the Judge. On clinical CT (the only powered experiment, n = 200), the pipeline reaches 99% of expert quality. The residual 1.5% concentrates at bifurcation points where certifiability breaks down. We formalize this boundary through the simulability class S and introduce this http URL, a structured specification format that makes any scientific computation problem machine-readable and solver-independent. Code, data, and all 72 benchmark tasks are publicly archived.

Subjects: Software Engineering (cs.SE); Machine Learning (cs.L...
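The abstract does not specify how the Judge Agent's convergence check is implemented. As a purely illustrative sketch of the general idea, the snippet below estimates the empirical convergence order of a numerical method by grid refinement and "passes judgment" only if it matches the theoretical order; all function names (`trapezoid`, `observed_order`, `judge`) and tolerances are hypothetical, not taken from the paper.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def observed_order(f, a, b, exact, n=64):
    """Estimate the empirical convergence order from errors at h and h/2.

    If error ~ C * h^p, then log2(e_h / e_{h/2}) ~ p.
    """
    e1 = abs(trapezoid(f, a, b, n) - exact)
    e2 = abs(trapezoid(f, a, b, 2 * n) - exact)
    return math.log2(e1 / e2)

def judge(f, a, b, exact, expected_order=2.0, tol=0.1):
    """Hypothetical Judge check: accept a solver run only if the
    measured convergence order matches the theoretical one."""
    p = observed_order(f, a, b, exact)
    return abs(p - expected_order) <= tol

# Example: integrate sin on [0, pi]; the exact value is 2.
# The trapezoidal rule is second-order, so the check should pass.
print(judge(math.sin, 0.0, math.pi, 2.0))
```

A silently failing code would typically exhibit a degraded or erratic empirical order, which a check of this kind flags without any reference solution beyond the refinement sequence itself.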