[2603.21362] AdaRubric: Task-Adaptive Rubrics for LLM Agent Evaluation
Computer Science > Artificial Intelligence
arXiv:2603.21362 (cs) [Submitted on 22 Mar 2026]

Title: AdaRubric: Task-Adaptive Rubrics for LLM Agent Evaluation
Authors: Liang Ding

Abstract: LLM-as-Judge evaluation fails on agent tasks because a fixed rubric cannot capture what matters for each task: code debugging demands Correctness and Error Handling, while web navigation demands Goal Alignment and Action Efficiency. We present ADARUBRIC, which closes this gap by generating task-specific evaluation rubrics on the fly from task descriptions, scoring trajectories step by step with confidence-weighted per-dimension feedback, and filtering preference pairs with the novel DimensionAwareFilter, a provably necessary condition for preventing high-scoring dimensions from masking dimension-level failures. On WebArena and ToolBench, ADARUBRIC achieves Pearson r=0.79 human correlation (+0.16 over the best static baseline) with deployment-grade reliability (Krippendorff's $\alpha$=0.83). DPO agents trained on ADARUBRIC preference pairs gain +6.8 to +8.5 pp task success over Prometheus across three benchmarks; gains transfer to SWE-bench code repair (+4.9 pp) and accelerate PPO convergence by +6.6 pp at 5K steps, both without any rubric engineering. Code: this https URL.

Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2603.2...
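The abstract describes two mechanisms: confidence-weighted aggregation of per-dimension scores, and a filter that rejects preference pairs where an aggregate score would mask a failure on some dimension. The paper's actual formulas are not given in the abstract, so the sketch below is an illustrative interpretation; `DimScore`, `weighted_score`, `dimension_aware_filter`, and the `margin` parameter are all hypothetical names, not the authors' API.

```python
from dataclasses import dataclass

@dataclass
class DimScore:
    score: float       # per-dimension rubric score in [0, 1]
    confidence: float  # judge's confidence weight in [0, 1]

def weighted_score(dims: dict[str, DimScore]) -> float:
    """Confidence-weighted aggregate over rubric dimensions (assumed form)."""
    total_conf = sum(d.confidence for d in dims.values())
    return sum(d.score * d.confidence for d in dims.values()) / total_conf

def dimension_aware_filter(chosen: dict[str, DimScore],
                           rejected: dict[str, DimScore],
                           margin: float = 0.0) -> bool:
    """Keep a preference pair only if the chosen trajectory is at least as
    good as the rejected one on EVERY dimension, so that a high aggregate
    score cannot mask a dimension-level failure (illustrative rule)."""
    return all(chosen[d].score >= rejected[d].score + margin for d in chosen)
```

Under this reading, a pair where the chosen trajectory wins on aggregate but loses on, say, Error Handling would be filtered out rather than used for DPO training.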