[2512.02711] CREST: Universal Safety Guardrails Through Cluster-Guided Cross-Lingual Transfer
Computer Science > Computation and Language

arXiv:2512.02711 (cs)

[Submitted on 2 Dec 2025 (v1), last revised 29 Mar 2026 (this version, v2)]

Title: CREST: Universal Safety Guardrails Through Cluster-Guided Cross-Lingual Transfer

Authors: Lavish Bansal, Naman Mishra

Abstract: Ensuring content safety in large language models (LLMs) is essential for their deployment in real-world applications. However, existing safety guardrails are predominantly tailored to high-resource languages, leaving underrepresented the significant portion of the world's population that communicates in low-resource languages. To address this, we introduce CREST (CRoss-lingual Efficient Safety Transfer), a parameter-efficient multilingual safety classification model that supports 100 languages with only 0.5B parameters. By training on a strategically chosen subset of only 13 high-resource languages, our model uses cluster-based cross-lingual transfer from those few languages to 100, generalizing effectively to both unseen high-resource and low-resource languages. This approach addresses the challenge of limited training data in low-resource settings. We conduct comprehensive evaluations across six safety benchmarks and demonstrate that CREST outperforms existing state-of-the-art guardrails of comparable scale and achieves comp...
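The abstract does not give the paper's selection procedure, but the idea of cluster-guided transfer can be illustrated with a minimal sketch: represent each language by a feature vector (here, hypothetical 2-D vectors; the language names and features are illustrative assumptions, not from the paper), cluster the languages, and train only on one representative per cluster so that cluster-mates benefit from transfer.

```python
# Illustrative sketch, NOT the paper's implementation: cluster-guided
# selection of training languages. Languages get hypothetical feature
# vectors, a tiny k-means groups them, and the language nearest each
# centroid is chosen as that cluster's training representative.
import math
import random

# Hypothetical 2-D feature vectors for a handful of languages.
LANG_FEATURES = {
    "en": (0.9, 0.1), "de": (0.8, 0.2), "nl": (0.85, 0.15),
    "hi": (0.2, 0.9), "bn": (0.25, 0.85), "ur": (0.3, 0.8),
    "zh": (0.5, 0.5), "ja": (0.55, 0.45),
}

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over tuples; returns the final centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Recompute centroids; an empty cluster keeps its old centroid.
        centroids = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

def pick_representatives(features, k):
    """Return one language per cluster: the one nearest its centroid."""
    langs = list(features)
    points = [features[l] for l in langs]
    centroids = kmeans(points, k)
    reps = {min(langs, key=lambda l: math.dist(features[l], c))
            for c in centroids}
    return sorted(reps)

print(pick_representatives(LANG_FEATURES, k=3))
```

A safety classifier would then be trained only on data from the selected representatives, with the expectation that unseen languages in the same cluster inherit the guardrail behavior.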