[2602.12587] Multi-Head Attention as a Source of Catastrophic Forgetting in MoE Transformers
Summary
The paper identifies multi-head attention as a source of catastrophic forgetting in Mixture-of-Experts (MoE) Transformers and proposes a new routing method to mitigate the issue.
Why It Matters
Understanding catastrophic forgetting in machine learning models, particularly in MoE Transformers, is crucial for developing systems that learn continuously without losing prior knowledge. This research provides insights that could improve the design of AI systems for continual learning.
Key Takeaways
- MoE Transformers experience significant forgetting despite sparse routing.
- A pre-routing bottleneck in multi-head attention makes routing ineffective: distinct feature compositions collide onto the same route.
- Introducing MH-MoE improves routing granularity and reduces forgetting.
- Quantifying the collision effect with an effective composition number helps in understanding model performance.
- The proposed method shows a reduction in backward transfer loss, enhancing continual learning.
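To make the routing-granularity point concrete, here is a minimal, illustrative sketch (not the authors' implementation; all dimensions and weight matrices are made up) contrasting a standard MoE router, which sees the full concatenation of all attention heads and makes one decision per token, with an MH-MoE-style router that splits the token back into head sub-tokens and routes each independently:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_heads, n_experts = 16, 4, 8
x = rng.normal(size=d_model)              # post-attention token (concatenated head outputs)
W_route = rng.normal(size=(d_model, n_experts))

# Standard MoE: one router decision per token, computed from the full
# concatenation of all head signals -> one route for the whole composition.
logits = x @ W_route
standard_route = int(np.argmax(logits))

# MH-MoE-style: split the token into head sub-tokens and route each one
# independently, so different head signals can reach different experts.
d_head = d_model // n_heads
W_sub = rng.normal(size=(d_head, n_experts))
sub_tokens = x.reshape(n_heads, d_head)
per_head_routes = [int(np.argmax(t @ W_sub)) for t in sub_tokens]

print(standard_route)    # a single expert index for the whole token
print(per_head_routes)   # one expert index per head sub-token
```

Under this reading, the finer routing unit is what lets MH-MoE separate compositions that the concatenated router would map to the same expert.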
Computer Science > Machine Learning — arXiv:2602.12587 (cs)
[Submitted on 13 Feb 2026]
Title: Multi-Head Attention as a Source of Catastrophic Forgetting in MoE Transformers
Authors: Anrui Chen, Ruijun Huang, Xin Zhang, Fang Dong, Hengjie Cao, Zhendong Huang, Yifeng Yang, Mengyi Chen, Jixian Zhou, Mingzhi Dong, Yujiang Wang, Jinlong Hou, Qin Lv, Robert P. Dick, Yuan Cheng, Tun Lu, Fan Yang, Li Shang
Abstract: Mixture-of-Experts (MoE) architectures are often considered a natural fit for continual learning because sparse routing should localize updates and reduce interference, yet MoE Transformers still forget substantially even with sparse, well-balanced expert utilization. We attribute this gap to a pre-routing bottleneck: multi-head attention concatenates head-specific signals into a single post-attention router input, forcing routing to act on co-occurring feature compositions rather than separable head channels. We show that this router input simultaneously encodes multiple separately decodable semantic and structural factors with uneven head support, and that different feature compositions induce weakly aligned parameter-gradient directions; as a result, routing maps many distinct compositions to the same route. We quantify this collision effect via a route-wise effective composition number $N_{eff}$ ...
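The abstract truncates before defining $N_{eff}$, so the following is only an assumed reading: an "effective number" is commonly computed as the exponentiated Shannon entropy (perplexity) of a distribution, here the distribution of feature compositions assigned to one route. The function name and formula below are illustrative, not the paper's definition:

```python
import numpy as np

def effective_number(counts):
    """Exponentiated Shannon entropy (perplexity) of a count vector.

    Assumed illustrative definition: equals k when k compositions are
    used uniformly by a route, and approaches 1 when one composition
    dominates.
    """
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()          # normalize counts to a probability distribution
    p = p[p > 0]             # drop zero entries (0 * log 0 := 0)
    return float(np.exp(-(p * np.log(p)).sum()))

# A route serving 4 compositions uniformly vs. one dominated composition:
print(effective_number([10, 10, 10, 10]))  # 4.0
print(effective_number([97, 1, 1, 1]))     # ~1.18, close to 1
```

Under this assumed measure, a high value on a single route would indicate many distinct compositions colliding onto the same expert, matching the paper's described collision effect.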