[2602.12587] Multi-Head Attention as a Source of Catastrophic Forgetting in MoE Transformers

arXiv - Machine Learning · 4 min read

Summary

The paper identifies multi-head attention as a source of catastrophic forgetting in Mixture-of-Experts (MoE) Transformers and proposes a finer-grained routing method, MH-MoE, to mitigate the problem.

Why It Matters

Understanding catastrophic forgetting in MoE Transformers is crucial for building systems that learn continuously without losing prior knowledge. The findings could inform the design of more robust architectures for continual learning.

Key Takeaways

  • MoE Transformers forget substantially despite sparse, well-balanced expert routing.
  • A pre-routing bottleneck in multi-head attention causes feature compositions to collide, making routing ineffective.
  • MH-MoE routes per-head signals separately, improving routing granularity and reducing forgetting.
  • A route-wise effective composition number quantifies the collision effect and helps explain model performance.
  • The proposed method reduces backward transfer loss, improving continual learning.
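The bottleneck the takeaways describe can be illustrated with a toy sketch. This is not the paper's implementation: the dimensions, router weight matrices, and argmax (top-1) routing below are illustrative assumptions. The point is the contrast between one router acting on the concatenated post-attention vector versus MH-MoE-style routing of each head's sub-token separately:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_heads, n_experts = 16, 4, 8
d_head = d_model // n_heads

# One post-attention token representation: a concatenation of head outputs.
token = rng.standard_normal(d_model)

# Standard MoE: a single router acts on the concatenated vector, so all head
# signals must share one routing decision (the "pre-routing bottleneck").
W_token = rng.standard_normal((d_model, n_experts))
standard_route = int(np.argmax(token @ W_token))

# MH-MoE-style routing, as described in the summary: split the token into
# per-head sub-tokens and route each one independently, so different heads
# can reach different experts.
W_head = rng.standard_normal((d_head, n_experts))
sub_tokens = token.reshape(n_heads, d_head)
head_routes = [int(np.argmax(sub @ W_head)) for sub in sub_tokens]

print("standard route:", standard_route)
print("per-head routes:", head_routes)
```

With a single router, four heads' worth of distinct signals collapse into one expert choice; the per-head variant makes as many routing decisions as there are heads.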

Computer Science > Machine Learning — arXiv:2602.12587 (cs) · Submitted on 13 Feb 2026

Title: Multi-Head Attention as a Source of Catastrophic Forgetting in MoE Transformers

Authors: Anrui Chen, Ruijun Huang, Xin Zhang, Fang Dong, Hengjie Cao, Zhendong Huang, Yifeng Yang, Mengyi Chen, Jixian Zhou, Mingzhi Dong, Yujiang Wang, Jinlong Hou, Qin Lv, Robert P. Dick, Yuan Cheng, Tun Lu, Fan Yang, Li Shang

Abstract: Mixture-of-Experts (MoE) architectures are often considered a natural fit for continual learning because sparse routing should localize updates and reduce interference, yet MoE Transformers still forget substantially even with sparse, well-balanced expert utilization. We attribute this gap to a pre-routing bottleneck: multi-head attention concatenates head-specific signals into a single post-attention router input, forcing routing to act on co-occurring feature compositions rather than separable head channels. We show that this router input simultaneously encodes multiple separately decodable semantic and structural factors with uneven head support, and that different feature compositions induce weakly aligned parameter-gradient directions; as a result, routing maps many distinct compositions to the same route. We quantify this collision effect via a route-wise effective composition number $N_{eff}$ ...
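The abstract truncates before defining $N_{eff}$, so the paper's exact formula is not available here. One standard way to turn "how many distinct compositions land on one route" into an effective count is the exponential of the Shannon entropy of the composition distribution; the sketch below uses that construction purely as an assumed stand-in:

```python
import numpy as np

def effective_composition_number(counts):
    """Entropy-based effective count: exp of the Shannon entropy of the
    empirical distribution over feature compositions mapped to one route.
    (Assumed definition -- the summary does not give the paper's formula.)"""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(np.exp(-(p * np.log(p)).sum()))

# A route hit by 4 compositions equally often collapses 4 distinct
# compositions into one decision: N_eff = 4.
print(effective_composition_number([10, 10, 10, 10]))  # -> 4.0

# A route dominated by a single composition behaves almost like a
# dedicated route: N_eff stays close to 1.
print(effective_composition_number([97, 1, 1, 1]))
```

Under this reading, a high $N_{eff}$ flags routes where many unrelated compositions share one expert, which is exactly where interference and forgetting would concentrate.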


