[2502.10361] Enhancing Multilingual LLM Pretraining with Model-Based Data Selection

arXiv - Machine Learning

Summary

This article presents a model-based data selection framework for enhancing multilingual LLM pretraining, demonstrating significant efficiency and performance improvements across various languages.

Why It Matters

As multilingual capabilities in AI become increasingly important, this research addresses the gap in effective data selection methods for non-English languages. By improving dataset curation, it enhances the performance of large language models, making them more accessible and effective in diverse linguistic contexts.

Key Takeaways

  • Introduces a model-based filtering framework for multilingual datasets.
  • Achieves competitive performance with only 15% of training tokens.
  • Extends the approach to 20 languages, enhancing accessibility.
  • Demonstrates improvements across multiple benchmarks.
  • Addresses the limitations of existing English-centric data selection methods.

Computer Science > Computation and Language
arXiv:2502.10361 (cs)
[Submitted on 14 Feb 2025 (v1), last revised 19 Feb 2026 (this version, v2)]

Title: Enhancing Multilingual LLM Pretraining with Model-Based Data Selection
Authors: Bettina Messmer, Vinko Sabolčec, Martin Jaggi

Abstract: Dataset curation has become a basis for strong large language model (LLM) performance. While various rule-based filtering heuristics exist for English and multilingual datasets, model-based filtering techniques have primarily focused on English. To address the disparity stemming from limited research on non-English languages, we develop a model-based filtering framework for multilingual datasets that aims to identify a diverse set of structured and knowledge-rich samples. Our approach emphasizes transparency, simplicity, and efficiency, leveraging Transformer- and FastText-based classifiers to ensure the broad accessibility of our technique and data. We conduct comprehensive ablation studies on the FineWeb-2 web crawl dataset across diverse language families, scripts, and resource availability to demonstrate the effectiveness of our method. Training a 1B-parameter Llama model for 70B and 119B tokens, our approach can match the baseline MMLU score with as little as 15% of the training tokens, while also improving across other benchmarks.
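The selection step the abstract describes can be sketched as follows: a model-based quality classifier assigns each document a score, and only the top-scoring fraction of the corpus (e.g. 15%) is kept for pretraining. This is a minimal, illustrative sketch, not the authors' implementation; `score_fn` is a placeholder standing in for a trained FastText- or Transformer-based classifier's quality probability.

```python
import heapq
from typing import Callable


def select_top_fraction(
    docs: list[str],
    score_fn: Callable[[str], float],
    keep_fraction: float = 0.15,
) -> list[str]:
    """Keep the highest-scoring fraction of documents.

    score_fn stands in for any model-based quality classifier,
    e.g. a FastText or Transformer classifier's P(high quality).
    """
    k = max(1, int(len(docs) * keep_fraction))
    scored = [(score_fn(d), i) for i, d in enumerate(docs)]
    top = heapq.nlargest(k, scored)
    # Preserve the original corpus order among the kept documents.
    keep_ids = sorted(i for _, i in top)
    return [docs[i] for i in keep_ids]


# Toy demonstration with a trivial length-based "scorer" (placeholder only).
docs = [f"doc {i} " + "text " * i for i in range(20)]
kept = select_top_fraction(docs, score_fn=len, keep_fraction=0.15)
print(len(kept))  # 3 of 20 documents retained
```

In practice the scoring pass is embarrassingly parallel over the corpus, which is one reason lightweight FastText classifiers are attractive for web-scale filtering alongside heavier Transformer-based scorers.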
