[2501.15461] Mamba-Based Graph Convolutional Networks: Tackling Over-smoothing with Selective State Space
Summary
The paper introduces MbaGCN, a novel graph convolutional network architecture designed to address the over-smoothing problem in deep GNNs by utilizing selective state space techniques.
Why It Matters
As graph neural networks (GNNs) become increasingly important in various applications, addressing the over-smoothing issue is crucial for enhancing their performance. MbaGCN represents a significant advancement in GNN design, potentially influencing future research and applications in graph-based learning.
Key Takeaways
- MbaGCN introduces a new architecture for GNNs to combat over-smoothing.
- The model incorporates three innovative components for better neighborhood information aggregation.
- Experimental results show that MbaGCN does not always outperform existing methods, but it lays a foundation for future advances.
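The over-smoothing effect the takeaways refer to is easy to reproduce: stacking plain mean-aggregation layers drives all node representations toward a common value. The sketch below (not from the paper; the toy graph and feature sizes are arbitrary) measures the spread of node features as aggregation is applied repeatedly.

```python
import numpy as np

# Toy graph: 4 nodes on a path, with self-loops added.
A = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
# Row-normalize so each layer takes the mean over a node's neighborhood.
A_hat = A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))  # distinct random node features at layer 0

def spread(X):
    """Average per-feature std across nodes: near 0 = indistinguishable."""
    return X.std(axis=0).mean()

print(f"layer  0: spread = {spread(X):.4f}")
for layer in range(1, 51):
    X = A_hat @ X  # plain mean aggregation, no selectivity
    if layer % 10 == 0:
        print(f"layer {layer:2d}: spread = {spread(X):.4f}")
```

The printed spread shrinks toward zero as depth grows: every node converges to the same representation, which is exactly the failure mode MbaGCN's selective state space mechanism is designed to counteract.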
Computer Science > Machine Learning
arXiv:2501.15461 (cs)
[Submitted on 26 Jan 2025 (v1), last revised 21 Feb 2026 (this version, v3)]

Title: Mamba-Based Graph Convolutional Networks: Tackling Over-smoothing with Selective State Space
Authors: Xin He, Yili Wang, Wenqi Fan, Xu Shen, Xin Juan, Rui Miao, Xin Wang

Abstract: Graph Neural Networks (GNNs) have shown great success in various graph-based learning tasks. However, they often face the issue of over-smoothing as model depth increases, which causes all node representations to converge to a single value and become indistinguishable. This issue stems from the inherent limitations of GNNs, which struggle to distinguish the importance of information from different neighborhoods. In this paper, we introduce MbaGCN, a novel graph convolutional architecture that draws inspiration from the Mamba paradigm, originally designed for sequence modeling. MbaGCN presents a new backbone for GNNs, consisting of three key components: the Message Aggregation Layer, the Selective State Space Transition Layer, and the Node State Prediction Layer. These components work in tandem to adaptively aggregate neighborhood information, providing greater flexibility and scalability for deep GNN models. While MbaGCN may not consistently outperform all existing methods on e...
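The abstract names three components but does not specify their equations. The sketch below is one plausible reading of that backbone, not the authors' implementation: the layer names come from the abstract, while the gating function, the fixed decay coefficient, all weight shapes, and the toy graph are assumptions. The key idea borrowed from Mamba is that an input-dependent gate decides, per node, how much new neighborhood information enters a persistent hidden state, rather than overwriting it wholesale at every layer.

```python
import numpy as np

rng = np.random.default_rng(1)

def message_aggregation(A_hat, X):
    # Message Aggregation Layer: mean-aggregate neighbor features.
    return A_hat @ X

def selective_transition(H, msg, W_gate, decay):
    # Selective State Space Transition Layer (our reading): a sigmoid
    # gate computed from the incoming message controls, elementwise,
    # how much of it is written into the hidden state H.
    gate = 1.0 / (1.0 + np.exp(-(msg @ W_gate)))
    return decay * H + gate * msg

def node_state_prediction(H, W_out):
    # Node State Prediction Layer: read per-node predictions off the state.
    return H @ W_out

# Toy setup: n nodes, d hidden dims, c classes (all sizes hypothetical).
n, d, c = 4, 8, 3
A = np.array([[1,1,0,0],[1,1,1,0],[0,1,1,1],[0,0,1,1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)
X = rng.normal(size=(n, d))
W_gate = rng.normal(size=(d, d)) * 0.1
W_out = rng.normal(size=(d, c)) * 0.1
decay = 0.9  # fixed state-decay coefficient (assumption)

H = np.zeros((n, d))
for _ in range(16):  # deep stacking without collapsing the node states
    H = selective_transition(H, message_aggregation(A_hat, X), W_gate, decay)
logits = node_state_prediction(H, W_out)
print(logits.shape)  # one prediction vector per node
```

Because the state is updated additively through a gate instead of being replaced by the neighborhood mean, node states need not converge to a single value as depth grows, which is the flexibility for deep GNNs that the abstract claims.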