[2603.25133] RubricEval: A Rubric-Level Meta-Evaluation Benchmark for LLM Judges in Instruction Following
Computer Science > Artificial Intelligence
arXiv:2603.25133 (cs)
[Submitted on 26 Mar 2026]

Title: RubricEval: A Rubric-Level Meta-Evaluation Benchmark for LLM Judges in Instruction Following
Authors: Tianjun Pan, Xuan Lin, Wenyan Yang, Qianyu He, Shisong Chen, Licai Qi, Wanqing Xu, Hongwei Feng, Bo Xu, Yanghua Xiao

Abstract: Rubric-based evaluation has become a prevailing paradigm for evaluating instruction following in large language models (LLMs). Despite its widespread use, the reliability of these rubric-level evaluations remains unclear, calling for meta-evaluation. However, prior meta-evaluation efforts largely focus on the response level, failing to assess the fine-grained judgment accuracy that rubric-based evaluation relies on. To bridge this gap, we introduce RubricEval. Our benchmark features: (1) the first rubric-level meta-evaluation benchmark for instruction following, (2) diverse instructions and responses spanning multiple categories and model sources, and (3) a substantial set of 3,486 quality-controlled instances, along with Easy/Hard subsets that better differentiate judge performance. Our experiments reveal that rubric-level judging remains far from solved: even GPT-4o, a widely adopted judge in instruction-following benchmarks, achieves only 55.97% on the Hard subset. Con...
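To make the headline number concrete, rubric-level meta-evaluation can be read as scoring a judge's per-criterion pass/fail verdicts against human gold labels, instance by instance. The sketch below is only an illustration of that reading, not the paper's released evaluation code; the data class, field names, and toy examples are assumptions introduced here.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class RubricInstance:
    instruction: str      # the prompt given to the model under evaluation
    response: str         # the model's answer
    criterion: str        # one rubric item, e.g. "has exactly three lines"
    human_label: bool     # gold pass/fail from human annotators
    judge_label: bool     # pass/fail produced by the LLM judge

def rubric_level_accuracy(instances: Iterable[RubricInstance]) -> float:
    """Fraction of rubric items where the judge's verdict matches the human label."""
    instances = list(instances)
    if not instances:
        return 0.0
    correct = sum(inst.judge_label == inst.human_label for inst in instances)
    return correct / len(instances)

# Hypothetical toy data; contents are illustrative, not drawn from the benchmark.
data = [
    RubricInstance("Write a haiku about rain.", "...", "has exactly three lines", True, True),
    RubricInstance("Write a haiku about rain.", "...", "follows 5-7-5 syllables", False, True),
]
print(f"rubric-level accuracy: {rubric_level_accuracy(data):.2%}")
```

Under this reading, the reported 55.97% for GPT-4o on the Hard subset would correspond to the judge agreeing with human annotators on just over half of the individual rubric criteria.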