[2510.18560] WebDevJudge: Evaluating (M)LLMs as Critiques for Web Development Quality
Computer Science > Software Engineering
arXiv:2510.18560 (cs)
[Submitted on 21 Oct 2025 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: WebDevJudge: Evaluating (M)LLMs as Critiques for Web Development Quality
Authors: Chunyang Li, Yilun Zheng, Xinting Huang, Tianqing Fang, Jiahao Xu, Lihui Chen, Yangqiu Song, Han Hu

Abstract: The paradigm of LLM-as-a-judge is emerging as a scalable and efficient alternative to human evaluation, demonstrating strong performance on well-defined tasks. However, its reliability in open-ended tasks with dynamic environments and complex interactions remains unexplored. To bridge this gap, we introduce WebDevJudge, a systematic benchmark for assessing LLM-as-a-judge performance in web development, with support for both non-interactive evaluation based on static observations and continuous interactive evaluation with a dynamic web environment. WebDevJudge comprises human preference labels over paired web implementations, annotated with structured and query-grounded rubrics to ensure high-quality ground truth. Using this benchmark, we comprehensively evaluate various evaluators, including LLMs, MLLMs, and agentic workflows. We systematically investigate the impact of different paradigms and guidance mechanisms. Our experiments reveal a significant gap between LLM judges and hum...