3LM: A Benchmark for Arabic LLMs in STEM and Code
Summary
The article introduces 3LM, a benchmark designed to evaluate Arabic LLMs on STEM and coding tasks, addressing a gap left by existing assessments, which focus on general-purpose language tasks.
Why It Matters
As Arabic LLMs gain traction, the 3LM benchmark fills a critical void by specifically assessing performance in STEM and coding, which are vital for educational and technical applications. This initiative supports the development of more capable AI systems tailored to Arabic-speaking users, enhancing their utility in real-world scenarios.
Key Takeaways
- 3LM is the first benchmark for evaluating Arabic LLMs in STEM and coding.
- It includes three datasets: Native STEM MCQs, Synthetic STEM MCQs, and Arabic Code Benchmarks.
- The benchmark aims to enhance the assessment of reasoning and coding skills in Arabic LLMs.
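To make the MCQ evaluation axis concrete, here is a minimal sketch of how multiple-choice benchmark scoring typically works. The item schema (`question`, `choices`, `answer`) and the `pick_answer` model stub are illustrative assumptions, not 3LM's actual dataset format or evaluation harness.

```python
# Minimal sketch of multiple-choice benchmark scoring. The item schema
# (question/choices/answer) is an assumption for illustration; the real
# 3LM datasets may use different field names.

def score_mcq(dataset, pick_answer):
    """Return accuracy of `pick_answer` over a list of MCQ items.

    `pick_answer(question, choices)` stands in for an LLM call that
    returns the index of the chosen option.
    """
    correct = 0
    for item in dataset:
        pred = pick_answer(item["question"], item["choices"])
        correct += pred == item["answer"]
    return correct / len(dataset)

# Toy example with a trivial "model" that always picks option 0.
toy = [
    {"question": "2 + 2 = ?", "choices": ["4", "5"], "answer": 0},
    {"question": "3 * 3 = ?", "choices": ["6", "9"], "answer": 1},
]
acc = score_mcq(toy, lambda q, c: 0)
print(acc)  # 0.5: first item correct, second wrong
```

A real harness would add answer extraction from free-form model output and per-subject breakdowns, but the accuracy computation reduces to this loop.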
Team Article, published August 1, 2025, by Basma Boussaha, Leen AlQadi, Mughaira, Shaikha Alsuwaidi, Giulia Campesan, Ahmed Alzubaidi, Mohammed Alyafeai, and Hakim Hacid (tiiuae).

Paper on arXiv | Datasets on HuggingFace | Code on GitHub

Why 3LM?
Arabic Large Language Models (LLMs) have seen notable progress in recent years, yet existing benchmarks fall short when it comes to evaluating performance in high-value technical domains. Most evaluations to date have focused on general-purpose tasks such as summarization, sentiment analysis, or generic question answering. However, scientific reasoning and programming are essential for a broad range of real-world applications, from education to technical problem solving. To address this gap, we introduce 3LM (علم), a multi-component benchmark tailored to evaluate Arabic LLMs on STEM (Science, Technology, Engineering, and Mathematics) subjects and code generation. 3LM is the first benchmark of its kind, designed specifically to test Arabic models on structured reasoning and formal logic, domains traditionally underrepresented in Arabic NLP.

What's in the Benchmark?
3LM is made up of three datasets, each targeting a specific evaluation axis: real-world multiple-choice STEM questions…