[2507.01785] MuRating: A High Quality Data Selecting Approach to Multilingual Large Language Model Pretraining
Computer Science > Computation and Language

arXiv:2507.01785 (cs)

[Submitted on 2 Jul 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Title: MuRating: A High Quality Data Selecting Approach to Multilingual Large Language Model Pretraining

Authors: Zhixun Chen, Ping Guo, Wenhan Han, Yifan Zhang, Binbin Liu, Haobin Lin, Fengze Liu, Yan Zhao, Bingni Zhang, Taifeng Wang, Yin Zheng, Trevor Cohn, Meng Fang

Abstract: Data quality is a critical driver of large language model performance, yet existing model-based selection methods focus almost exclusively on English. We introduce MuRating, a scalable framework that transfers high-quality English data-quality signals into a single rater for 17 target languages. MuRating aggregates multiple English "raters" via pairwise comparisons to learn unified document-quality scores, then projects these judgments through translation to train a multilingual evaluator on monolingual, cross-lingual, and parallel text pairs. Applied to web data, MuRating selects balanced subsets of English and multilingual content to pretrain a 1.2B-parameter LLaMA model. Compared to strong baselines, including QuRater, AskLLM, and DCLM, our approach boosts average accuracy on both English benchmarks and multilingual evaluations, with especially large gains on kno...
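The abstract describes aggregating multiple English raters' pairwise comparisons into unified document-quality scores. One standard way to turn pairwise preferences into scalar scores is a Bradley-Terry model; the sketch below fits such scores by gradient ascent. This is an illustrative assumption, not the paper's actual aggregation procedure, and the function and variable names are hypothetical.

```python
import math

def fit_bradley_terry(num_docs, comparisons, lr=0.1, steps=500):
    """Fit scalar quality scores s_i from (winner, loser) pairs under
    a Bradley-Terry model: P(i beats j) = sigmoid(s_i - s_j).
    Scores are learned by gradient ascent on the log-likelihood."""
    scores = [0.0] * num_docs
    for _ in range(steps):
        grads = [0.0] * num_docs
        for winner, loser in comparisons:
            # Probability the model currently assigns to the observed outcome.
            p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            # Gradient of log sigmoid(s_w - s_l) w.r.t. each score.
            grads[winner] += 1.0 - p_win
            grads[loser] -= 1.0 - p_win
        for i in range(num_docs):
            scores[i] += lr * grads[i]
    return scores

# Toy usage: doc 0 is consistently preferred over doc 1, and doc 1 over doc 2.
comparisons = [(0, 1), (0, 1), (1, 2), (0, 2)]
scores = fit_bradley_terry(3, comparisons)
assert scores[0] > scores[1] > scores[2]
```

In a multi-rater setting like the one the abstract sketches, each rater's pairwise judgments would simply be pooled into the `comparisons` list before fitting, yielding a single unified score per document.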