[2412.13091] LMUnit: Fine-grained Evaluation with Natural Language Unit Tests
Computer Science > Computation and Language

arXiv:2412.13091 (cs)

[Submitted on 17 Dec 2024 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: LMUnit: Fine-grained Evaluation with Natural Language Unit Tests

Authors: Jon Saad-Falcon, Rajan Vivek, William Berrios, Nandita Shankar Naik, Matija Franklin, Bertie Vidgen, Amanpreet Singh, Douwe Kiela, Shikib Mehri

Abstract: As language models become integral to critical workflows, assessing their behavior remains a fundamental challenge -- human evaluation is costly and noisy, while automated metrics provide only coarse, difficult-to-interpret signals. We introduce natural language unit tests, a paradigm that decomposes response quality into explicit, testable criteria, along with a unified scoring model, LMUnit, which combines multi-objective training across preferences, direct ratings, and natural language rationales. Through controlled human studies, we show this paradigm significantly improves inter-annotator agreement and enables more effective LLM development workflows. LMUnit achieves state-of-the-art performance on evaluation benchmarks (FLASK, BigGenBench) and competitive results on RewardBench. These results validate both our proposed paradigm and scoring model, suggesting a promising path forward for language model evaluation and development.

Subjects: Computation and Language (cs.CL)
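The paradigm described in the abstract can be sketched in code: response quality is decomposed into explicit natural language unit tests, each scored independently and then aggregated. The sketch below is hypothetical and uses a toy keyword-overlap scorer purely as a stand-in for a trained scoring model such as LMUnit; the test phrasings, keyword lists, and function names are illustrative assumptions, not part of the paper.

```python
# Hypothetical sketch of the natural language unit test paradigm.
# A toy keyword scorer stands in for a learned scoring model (e.g. LMUnit),
# which would instead judge each criterion directly from the text.

def score_unit_test(response: str, keywords: list[str]) -> float:
    """Toy per-criterion score: fraction of criterion keywords found in the response."""
    text = response.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords)

def evaluate(response: str, unit_tests: dict[str, list[str]]) -> tuple[dict[str, float], float]:
    """Score a response against each natural language unit test; return per-test and mean scores."""
    scores = {test: score_unit_test(response, kws) for test, kws in unit_tests.items()}
    overall = sum(scores.values()) / len(scores)
    return scores, overall

# Illustrative unit tests mapping a natural-language criterion to stand-in keywords.
unit_tests = {
    "Does the response reference its source?": ["source", "cite"],
    "Does the response acknowledge a limitation?": ["limitation", "caveat"],
}
response = "According to the cited source, accuracy improves, though one limitation is cost."
per_test, overall = evaluate(response, unit_tests)
print(per_test)   # first criterion fully satisfied, second only partially
print(overall)
```

The fine-grained per-test scores are what make the evaluation interpretable: a developer can see which specific criterion a response fails rather than receiving a single opaque quality number.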