[2602.22554] Multilingual Safety Alignment Via Sparse Weight Editing
Summary
This paper presents a training-free framework for aligning safety behavior across languages in multilingual large language models (LLMs) through Sparse Weight Editing, addressing the safety disparities between high- and low-resource languages.
Why It Matters
As LLMs become more prevalent, ensuring safety across various languages is crucial, especially for low-resource languages that often lack robust safety measures. This research proposes an efficient method to enhance safety without extensive computational resources, making it relevant for developers and researchers in AI safety.
Key Takeaways
- Proposes a training-free alignment framework using Sparse Weight Editing.
- Addresses safety disparities in low-resource languages compared to high-resource languages.
- Demonstrates significant reduction in Attack Success Rate (ASR) with minimal impact on general reasoning capabilities.
Computer Science > Machine Learning
arXiv:2602.22554 (cs)
[Submitted on 26 Feb 2026]
Title: Multilingual Safety Alignment Via Sparse Weight Editing
Authors: Jiaming Liang, Zhaoxin Wang, Handing Wang
Abstract: Large Language Models (LLMs) exhibit significant safety disparities across languages, with low-resource languages (LRLs) often bypassing safety guardrails established for high-resource languages (HRLs) like English. Existing solutions, such as multilingual supervised fine-tuning (SFT) or Reinforcement Learning from Human Feedback (RLHF), are computationally expensive and dependent on scarce multilingual safety data. In this work, we propose a novel, training-free alignment framework based on Sparse Weight Editing. Identifying that safety capabilities are localized within a sparse set of safety neurons, we formulate the cross-lingual alignment problem as a constrained linear transformation. We derive a closed-form solution to optimally map the harmful representations of LRLs to the robust safety subspaces of HRLs, while preserving general utility via a null-space projection constraint. Extensive experiments across 8 languages and multiple model families (Llama-3, Qwen-2.5) demonstrate that our method substantially reduces Attack Success Rate (ASR) in LRLs with negligible impact on general reasoning capabilities, all achi...
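The closed-form constrained edit described in the abstract can be sketched in NumPy. This is a minimal illustration, not the paper's actual formulation: the residual target, the null-space projector construction, and the row-norm "safety neuron" selection are all assumptions. The idea is to solve for a weight update `dW` so that the layer maps harmful LRL keys to the safe HRL-aligned values, while `dW` annihilates utility-preserving keys (the null-space projection constraint).

```python
import numpy as np

def null_space_projector(K_util):
    """Projector P such that P @ u = 0 for every column u of K_util,
    so any update of the form Z @ P leaves utility activations untouched."""
    return np.eye(K_util.shape[0]) - K_util @ np.linalg.pinv(K_util)

def sparse_safety_edit(W, K_harm, V_safe, K_util, top_k):
    """Closed-form constrained least squares (illustrative, not the paper's exact method):
    find dW minimizing ||(W + dW) @ K_harm - V_safe|| subject to dW @ K_util = 0,
    then keep only the top_k largest-norm rows as candidate 'safety neurons'."""
    R = V_safe - W @ K_harm                      # residual the edit must correct
    P = null_space_projector(K_util)
    # dW = R @ pinv(P @ K_harm) @ P satisfies dW @ K_util = 0 by construction
    dW = R @ np.linalg.pinv(P @ K_harm) @ P
    # Sparsify: zero all but the top_k rows by norm. Zeroing whole rows preserves
    # the null-space constraint (each surviving row is still orthogonal to K_util),
    # but trades off how exactly the harmful keys are remapped.
    row_norms = np.linalg.norm(dW, axis=1)
    keep = np.argsort(row_norms)[-top_k:]
    mask = np.zeros(dW.shape[0], dtype=bool)
    mask[keep] = True
    dW[~mask] = 0.0
    return W + dW
```

With `top_k` equal to the full hidden dimension, the edit reproduces `V_safe` on the harmful keys exactly (given enough rank); smaller `top_k` localizes the change to a sparse set of rows at the cost of an approximate remap, mirroring the sparsity/utility trade-off the paper targets.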