[2603.04410] SalamahBench: Toward Standardized Safety Evaluation for Arabic Language Models
Computer Science > Computation and Language

arXiv:2603.04410 (cs) [Submitted on 3 Feb 2026]

Title: SalamahBench: Toward Standardized Safety Evaluation for Arabic Language Models
Authors: Omar Abdelnasser, Fatemah Alharbi, Khaled Khasawneh, Ihsen Alouani, Mohammed E. Fouda

Abstract: Safety alignment in Language Models (LMs) is fundamental to trustworthy AI. However, although diverse stakeholders seek to leverage Arabic Language Models (ALMs), systematic safety evaluation of ALMs remains largely underexplored, limiting their mainstream uptake. Existing safety benchmarks and safeguard models are predominantly English-centric, which limits their applicability to Arabic Natural Language Processing (NLP) systems and obscures fine-grained, category-level safety vulnerabilities. This paper introduces SalamaBench, a unified benchmark for evaluating the safety of ALMs, comprising $8,170$ prompts across $12$ categories aligned with the MLCommons Safety Hazard Taxonomy. Constructed by harmonizing heterogeneous datasets through a rigorous pipeline of AI filtering and multi-stage human verification, SalamaBench enables standardized, category-aware safety evaluation. Using this benchmark, we evaluate five state-of-the-art ALMs, including Fanar 1 and 2, ALLaM 2, Falcon H1R, and Jais 2, under multiple safeguard conf...
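The abstract describes "category-aware" evaluation: prompts are grouped into hazard categories and safety results are reported per category rather than as a single aggregate score. As a minimal illustrative sketch (not the paper's actual evaluation code; the category names and data format below are hypothetical), per-category safe-response rates can be computed like this:

```python
from collections import defaultdict

def category_safety_rates(results):
    """Aggregate per-category safe-response rates.

    `results` is an iterable of (category, is_safe) pairs, where
    is_safe is True when the model's response to a prompt in that
    hazard category was judged safe.
    """
    totals = defaultdict(int)
    safe = defaultdict(int)
    for category, is_safe in results:
        totals[category] += 1
        if is_safe:
            safe[category] += 1
    # Safe-response rate per category, e.g. {'hate': 0.5, ...}
    return {c: safe[c] / totals[c] for c in totals}

# Example with two hypothetical hazard categories
demo = [
    ("hate", True), ("hate", False),
    ("self-harm", True), ("self-harm", True),
]
print(category_safety_rates(demo))  # {'hate': 0.5, 'self-harm': 1.0}
```

Reporting such per-category rates, rather than one overall number, is what surfaces the fine-grained vulnerabilities the abstract says English-centric benchmarks obscure.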