[2602.04448] RASA: Routing-Aware Safety Alignment for Mixture-of-Experts Models
Computer Science > Machine Learning
arXiv:2602.04448 (cs)
[Submitted on 4 Feb 2026 (v1), last revised 4 Apr 2026 (this version, v2)]

Title: RASA: Routing-Aware Safety Alignment for Mixture-of-Experts Models
Authors: Jiacheng Liang, Yuhui Wang, Tanqiu Jiang, Ting Wang

Abstract: Mixture-of-Experts (MoE) language models introduce unique challenges for safety alignment due to their sparse routing mechanisms, which can enable degenerate optimization behaviors under standard full-parameter fine-tuning. In our preliminary experiments, we observe that naively applying full-parameter safety fine-tuning to MoE models can reduce attack success rates through routing or expert dominance effects, rather than by directly repairing Safety-Critical Experts. To address this challenge, we propose RASA, a routing-aware expert-level alignment framework that explicitly repairs Safety-Critical Experts while preventing routing-based bypasses. RASA identifies experts disproportionately activated by successful jailbreaks, selectively fine-tunes only these experts under fixed routing, and subsequently enforces routing consistency with safety-aligned contexts. Across two representative MoE architectures and a diverse set of jailbreak attacks, RASA achieves near-perfect robustness, strong cross-attack generalization, and substantially reduce...
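The identification step described in the abstract, finding experts disproportionately activated by successful jailbreaks, can be sketched as a comparison of per-expert routing frequencies on jailbreak versus benign prompts. This is a minimal illustrative sketch, not the paper's actual procedure: the trace format, frequency statistic, and `margin` threshold are all assumptions.

```python
from collections import Counter

def expert_frequencies(routing_traces, num_experts):
    """Fraction of routing decisions assigned to each expert.

    routing_traces: list of per-prompt lists of selected expert ids
    (one id per token-routing decision).
    """
    counts = Counter(e for trace in routing_traces for e in trace)
    total = sum(counts.values()) or 1
    return [counts[e] / total for e in range(num_experts)]

def safety_critical_experts(jailbreak_traces, benign_traces,
                            num_experts, margin=0.05):
    """Flag experts whose routing frequency on successful jailbreaks
    exceeds their benign-prompt frequency by more than `margin`.
    (Hypothetical criterion; the paper's exact statistic may differ.)
    """
    jb = expert_frequencies(jailbreak_traces, num_experts)
    bn = expert_frequencies(benign_traces, num_experts)
    return [e for e in range(num_experts) if jb[e] - bn[e] > margin]

# Toy example: expert 2 dominates routing on jailbreak prompts.
jailbreak = [[2, 2, 1], [2, 0, 2]]
benign    = [[0, 1, 3], [1, 3, 0]]
print(safety_critical_experts(jailbreak, benign, num_experts=4))  # → [2]
```

In a real MoE model the traces would come from logging the router's top-k selections during inference; the flagged experts would then be fine-tuned with the router frozen, per the abstract's description of fixed-routing repair.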