[2509.05425] No Text Needed: Forecasting MT Quality and Inequity from Fertility and Metadata


arXiv - AI 3 min read


Computer Science > Computation and Language
arXiv:2509.05425 (cs.CL)
[Submitted on 5 Sep 2025 (v1), last revised 3 Mar 2026 (this version, v2)]

Title: No Text Needed: Forecasting MT Quality and Inequity from Fertility and Metadata

Authors: Jessica M. Lundin, Ada Zhang, David Adelani, Cody Carroll

Abstract: We show that translation quality can be predicted with surprising accuracy *without ever running the translation system itself*. Using only a handful of features (token fertility ratios, token counts, and basic linguistic metadata such as language family, script, and region), we can forecast ChrF scores for GPT-4o translations across 203 languages in the FLORES-200 benchmark. Gradient boosting models achieve favorable performance ($R^{2}=0.66$ for XX$\rightarrow$English and $R^{2}=0.72$ for English$\rightarrow$XX). Feature importance analyses reveal that typological factors dominate predictions into English, while fertility plays a larger role for translations into diverse target languages. These findings suggest that translation quality is shaped by both token-level fertility and broader linguistic typology, offering new insights for multilingual evaluation and quality estimation.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2509.05425 [cs.CL]
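The pipeline the abstract describes can be sketched as follows: build a feature vector from fertility, token counts, and encoded typological metadata, then fit a gradient boosting regressor to predict ChrF. This is a minimal illustration using scikit-learn on synthetic data; the feature encodings and the fabricated target are assumptions for demonstration, not the authors' actual dataset or model configuration.

```python
# Hedged sketch of the paper's approach: forecast ChrF from fertility and
# metadata features with gradient boosting. All data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Synthetic stand-ins for the features named in the abstract.
fertility = rng.uniform(0.8, 3.0, n)    # target tokens per source token
token_count = rng.integers(10, 60, n)   # source-side token count
family = rng.integers(0, 8, n)          # language family, integer-encoded
script = rng.integers(0, 5, n)          # writing script, integer-encoded
region = rng.integers(0, 6, n)          # geographic region, integer-encoded

X = np.column_stack([fertility, token_count, family, script, region])
# Fabricated ChrF target: higher fertility tends to mean lower quality,
# plus noise. This relationship is an illustrative assumption.
y = 70.0 - 8.0 * fertility + 0.05 * token_count + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"held-out R^2: {r2_score(y_te, model.predict(X_te)):.2f}")
```

With real FLORES-200 features the paper reports $R^{2}$ around 0.66-0.72; here the score only reflects the synthetic relationship. `model.feature_importances_` would give the kind of feature-importance analysis the abstract mentions.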

Originally published on March 04, 2026. Curated by AI News.


