[2603.19273] LSR: Linguistic Safety Robustness Benchmark for Low-Resource West African Languages
Computer Science > Computation and Language

arXiv:2603.19273 (cs) [Submitted on 27 Feb 2026]

Title: LSR: Linguistic Safety Robustness Benchmark for Low-Resource West African Languages
Authors: Godwin Abuh Faruna

Abstract: Safety alignment in large language models relies predominantly on English-language training data. When harmful intent is expressed in low-resource languages, refusal mechanisms that hold in English frequently fail to activate. We introduce LSR (Linguistic Safety Robustness), the first systematic benchmark for measuring cross-lingual refusal degradation in West African languages: Yoruba, Hausa, Igbo, and Igala. LSR uses a dual-probe evaluation protocol, submitting matched English and target-language probes to the same model, and introduces Refusal Centroid Drift (RCD), a metric that quantifies how much of a model's English refusal behavior is lost when harmful intent is encoded in a target language. We evaluate Gemini 2.5 Flash across 14 culturally grounded attack probes in four harm categories. English refusal rates hold at approximately 90 percent, while refusal rates across the West African languages fall to 35-55 percent, with Igala showing the most severe degradation (RCD = 0.55). LSR is implemented in the Inspect AI evaluation framework and is available as a PR-ready contribution to the UK AISI's...
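The abstract does not give the exact formula for RCD, but one plausible reading, consistent with the reported numbers (English refusal near 0.90, Igala near 0.35, RCD = 0.55), is the absolute drop in refusal rate between matched English and target-language probes. The sketch below illustrates that assumed formulation; the function names and the verdict data are hypothetical, not from the paper.

```python
# Sketch of a dual-probe refusal comparison and an ASSUMED RCD computation.
# The paper's exact RCD definition is not stated in the abstract; here RCD is
# taken to be english_refusal_rate - target_refusal_rate.

def refusal_rate(verdicts):
    """Fraction of probe responses judged to be refusals (verdicts are booleans)."""
    return sum(verdicts) / len(verdicts)

def refusal_centroid_drift(english_verdicts, target_verdicts):
    """Assumed RCD: absolute drop in refusal rate from English to the target language."""
    return refusal_rate(english_verdicts) - refusal_rate(target_verdicts)

# Hypothetical refusal verdicts over 14 matched probes (True = model refused).
english = [True] * 13 + [False]      # ~0.93 refusal rate
igala = [True] * 5 + [False] * 9     # ~0.36 refusal rate

print(round(refusal_centroid_drift(english, igala), 2))  # → 0.57
```

Under this reading, an RCD of 0.55 for Igala means the model refuses roughly 55 percentage points fewer harmful probes in Igala than it does for the same probes posed in English.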