[2603.25222] Translation or Recitation? Calibrating Evaluation Scores for Machine Translation of Extremely Low-Resource Languages
Computer Science > Computation and Language
arXiv:2603.25222 (cs)
[Submitted on 26 Mar 2026]

Title: Translation or Recitation? Calibrating Evaluation Scores for Machine Translation of Extremely Low-Resource Languages
Authors: Danlu Chen, Ka Sing He, Jiahe Tian, Chenghao Xiao, Zhaofeng Wu, Taylor Berg-Kirkpatrick, Freda Shi

Abstract: The landscape of extremely low-resource machine translation (MT) is characterized by perplexing variability in reported performance, often making results across different language pairs difficult to contextualize. For researchers focused on specific language groups -- such as ancient languages -- it is nearly impossible to determine whether breakthroughs reported in other contexts (e.g., native African or American languages) result from superior methodologies or are merely artifacts of benchmark collection. To address this problem, we introduce the FRED Difficulty Metrics -- Fertility Ratio (F), Retrieval Proxy (R), Pre-training Exposure (E), and Corpus Diversity (D) -- which serve as dataset-intrinsic metrics to contextualize reported scores. These metrics reveal that a significant portion of result variability is explained by train-test overlap and pre-training exposure rather than model capability. Additionally, we identify that some languages -- par...
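The abstract names dataset-intrinsic signals such as a fertility ratio and train-test overlap. As a rough illustration only (the paper's exact definitions of the FRED metrics are not given on this page, so the formulas below are assumptions), a fertility ratio can be sketched as the average target-to-source token ratio over aligned sentence pairs, and train-test overlap as the fraction of test source sentences that appear verbatim in the training data:

```python
# Illustrative sketch, NOT the paper's definitions: two simple
# dataset-intrinsic signals of the kind the abstract describes.

def fertility_ratio(source_sentences, target_sentences):
    """Ratio of total target tokens to total source tokens
    over aligned sentence pairs (whitespace tokenization)."""
    src_tokens = sum(len(s.split()) for s in source_sentences)
    tgt_tokens = sum(len(t.split()) for t in target_sentences)
    return tgt_tokens / src_tokens

def train_test_overlap(train_sources, test_sources):
    """Fraction of test source sentences seen verbatim in training data;
    a crude proxy for the train-test overlap the abstract mentions."""
    train_set = set(train_sources)
    hits = sum(1 for s in test_sources if s in train_set)
    return hits / len(test_sources)

if __name__ == "__main__":
    train = ["the cat sat", "a dog ran", "birds fly high"]
    test = ["the cat sat", "fish swim fast"]
    targets = ["le chat s'est assis", "les poissons nagent vite"]
    print(fertility_ratio(test, targets))
    print(train_test_overlap(train, test))  # 0.5: one of two test sentences is in train
```

A high overlap rate would suggest that strong reported scores may reflect recitation of memorized pairs rather than genuine translation ability, which is the calibration concern the paper raises.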