[2602.04448] RASA: Routing-Aware Safety Alignment for Mixture-of-Experts Models



Computer Science > Machine Learning

arXiv:2602.04448 (cs) [Submitted on 4 Feb 2026 (v1), last revised 4 Apr 2026 (this version, v2)]

Title: RASA: Routing-Aware Safety Alignment for Mixture-of-Experts Models

Authors: Jiacheng Liang, Yuhui Wang, Tanqiu Jiang, Ting Wang

Abstract: Mixture-of-Experts (MoE) language models introduce unique challenges for safety alignment due to their sparse routing mechanisms, which can enable degenerate optimization behaviors under standard full-parameter fine-tuning. In our preliminary experiments, we observe that naively applying full-parameter safety fine-tuning to MoE models can reduce attack success rates through routing or expert-dominance effects, rather than by directly repairing Safety-Critical Experts. To address this challenge, we propose RASA, a routing-aware expert-level alignment framework that explicitly repairs Safety-Critical Experts while preventing routing-based bypasses. RASA identifies experts disproportionately activated by successful jailbreaks, selectively fine-tunes only these experts under fixed routing, and subsequently enforces routing consistency with safety-aligned contexts. Across two representative MoE architectures and a diverse set of jailbreak attacks, RASA achieves near-perfect robustness, strong cross-attack generalization, and substantially reduces...
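The abstract outlines a three-stage procedure: identify experts disproportionately activated by successful jailbreaks, fine-tune only those experts with routing held fixed, then enforce routing consistency with safety-aligned contexts. The paper's actual implementation is not available here, but a minimal PyTorch sketch of the first two stages conveys the idea; the toy top-1 MoE layer, the activation-frequency-ratio criterion, and the 2.0 threshold are illustrative assumptions, not the authors' specification.

```python
# Illustrative sketch (not the authors' code): pick out experts that fire
# disproportionately on jailbreak prompts, then unfreeze only those experts
# while freezing the router so the "repair" cannot be routed around.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy top-1 MoE layer: a linear router sends each token to one expert."""
    def __init__(self, d_model=32, n_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))

    def forward(self, x):                       # x: (tokens, d_model)
        logits = self.router(x)                 # (tokens, n_experts)
        idx = logits.argmax(dim=-1)             # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                out[mask] = expert(x[mask])
        return out, idx

def expert_frequencies(model, tokens):
    """Fraction of tokens routed to each expert."""
    with torch.no_grad():
        _, idx = model(tokens)
    counts = torch.bincount(idx, minlength=len(model.experts)).float()
    return counts / counts.sum()

moe = TinyMoE()
benign = torch.randn(512, 32)      # stand-ins for hidden states collected on
jailbreak = torch.randn(512, 32)   # benign vs. successful-jailbreak prompts

# Stage 1: experts disproportionately activated by jailbreaks
# (2.0 is an assumed threshold, not a value from the paper).
ratio = expert_frequencies(moe, jailbreak) / (expert_frequencies(moe, benign) + 1e-8)
critical = (ratio > 2.0).nonzero(as_tuple=True)[0].tolist()

# Stage 2: fine-tune only the critical experts under fixed routing --
# freeze the router and all non-critical experts.
for p in moe.router.parameters():
    p.requires_grad = False
for e, expert in enumerate(moe.experts):
    for p in expert.parameters():
        p.requires_grad = e in critical
# Stage 3 (routing consistency with safety-aligned contexts) would add a
# routing-level objective on top; it is omitted from this sketch.
```

With this setup, any standard fine-tuning loop on refusal data updates only the selected experts, which matches the abstract's claim of repairing Safety-Critical Experts rather than letting gradients shift the routing instead.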

Originally published on April 07, 2026. Curated by AI News.

