[2603.19921] Span-Level Machine Translation Meta-Evaluation
Computer Science > Computation and Language
arXiv:2603.19921 (cs)
[Submitted on 20 Mar 2026]

Title: Span-Level Machine Translation Meta-Evaluation
Authors: Stefano Perrella, Eric Morales Agostinho, Hugo Zaragoza

Abstract: Machine Translation (MT) and automatic MT evaluation have improved dramatically in recent years, enabling numerous novel applications. Automatic evaluation techniques have evolved from producing scalar quality scores to precisely locating translation errors and assigning them error categories and severity levels. However, it remains unclear how to reliably measure the evaluation capabilities of auto-evaluators that perform error detection, as no established technique exists in the literature. This work investigates different implementations of span-level precision, recall, and F-score, showing that seemingly similar approaches can yield substantially different rankings, and that certain widely used techniques are unsuitable for evaluating MT error detection. We propose "match with partial overlap and partial credit" (MPP) with micro-averaging as a robust meta-evaluation strategy and publicly release code for its use. Finally, we use MPP to assess the state of the art in MT error detection.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.19921 [cs.CL]
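To make the abstract's central idea concrete, here is a minimal sketch of what span-level matching with partial overlap, partial credit, and micro-averaging might look like. This is an illustration only, not the authors' released implementation: the character-offset span representation, the best-match credit function, and the function name `mpp_f1` are all assumptions made for this example.

```python
# Illustrative sketch (NOT the paper's released code) of partial-overlap,
# partial-credit span matching with micro-averaged precision/recall/F1.
# Spans are (start, end) character offsets, end exclusive.

def overlap(a, b):
    """Length of the overlap between two (start, end) spans."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def mpp_f1(predicted, gold):
    """Micro-averaged precision, recall, and F1 with partial credit.

    `predicted` and `gold` are lists of segments; each segment is a list
    of (start, end) error spans. Each predicted span earns precision
    credit proportional to the fraction of it covered by its
    best-matching gold span; each gold span earns recall credit
    symmetrically. Counts are pooled over all segments (micro-average).
    """
    credit_p = 0.0  # partial-credit matches on the precision side
    credit_r = 0.0  # partial-credit matches on the recall side
    n_pred = 0
    n_gold = 0
    for pred_spans, gold_spans in zip(predicted, gold):
        n_pred += len(pred_spans)
        n_gold += len(gold_spans)
        for p in pred_spans:
            # fraction of the predicted span covered by the best gold span
            credit_p += max(
                (overlap(p, g) / (p[1] - p[0]) for g in gold_spans),
                default=0.0,
            )
        for g in gold_spans:
            # fraction of the gold span covered by the best predicted span
            credit_r += max(
                (overlap(p, g) / (g[1] - g[0]) for p in pred_spans),
                default=0.0,
            )
    precision = credit_p / n_pred if n_pred else 0.0
    recall = credit_r / n_gold if n_gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

For example, a predicted span (0, 4) against a gold span (2, 6) overlaps by 2 characters, so both precision and recall credit are 0.5 rather than the 0.0 that strict exact-span matching would assign; this is the kind of difference between seemingly similar implementations that the paper examines.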