[2511.05876] MoEGCL: Mixture of Ego-Graphs Contrastive Representation Learning for Multi-View Clustering
Computer Science > Computer Vision and Pattern Recognition

arXiv:2511.05876 (cs)

[Submitted on 8 Nov 2025 (v1), last revised 24 Mar 2026 (this version, v5)]

Title: MoEGCL: Mixture of Ego-Graphs Contrastive Representation Learning for Multi-View Clustering

Authors: Jian Zhu, Xin Zou, Jun Sun, Cheng Luo, Lei Liu, Lingfang Zeng, Ning Zhang, Bian Wu, Chang Tang, Lirong Dai

Abstract: In recent years, the advancement of Graph Neural Networks (GNNs) has significantly propelled progress in Multi-View Clustering (MVC). However, existing methods suffer from coarse-grained graph fusion: they typically generate a separate graph structure for each view and then fuse these structures with per-view weights, a relatively rough strategy. To address this limitation, we present a novel Mixture of Ego-Graphs Contrastive Representation Learning (MoEGCL) framework, which consists of two main modules. First, we propose an innovative Mixture of Ego-Graphs Fusion (MoEGF) module, which constructs ego graphs and employs a Mixture-of-Experts network to fuse them at the sample level, rather than at the conventional view level. Second, we present an Ego Graph Contrastive Learning (EGCL) module to align the fused representation...
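The abstract only sketches the mechanism, so the following is a minimal, hypothetical PyTorch sketch of the core idea it describes: per-sample (rather than per-view) mixture-of-experts fusion of per-view ego-graph embeddings, plus a generic InfoNCE loss as a stand-in for the EGCL alignment objective, whose exact form is truncated in this abstract. All class and function names, shapes, and hyperparameters here are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEEgoGraphFusion(nn.Module):
    """Hypothetical sketch of sample-level MoE fusion of per-view
    ego-graph embeddings (names and shapes are assumptions)."""

    def __init__(self, dim: int, num_views: int):
        super().__init__()
        # One expert per view, transforming that view's ego-graph embedding.
        self.experts = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_views)
        )
        # Gating network: scores each view *per sample* from the
        # concatenated view embeddings.
        self.gate = nn.Linear(dim * num_views, num_views)

    def forward(self, view_embs: list[torch.Tensor]) -> torch.Tensor:
        # view_embs: V tensors of shape (N, dim), one ego-graph embedding
        # per sample per view (e.g., produced by per-view GNN encoders).
        stacked = torch.stack(view_embs, dim=1)          # (N, V, dim)
        gate_in = stacked.flatten(start_dim=1)           # (N, V*dim)
        weights = F.softmax(self.gate(gate_in), dim=-1)  # (N, V)
        expert_out = torch.stack(
            [exp(e) for exp, e in zip(self.experts, view_embs)], dim=1
        )                                                # (N, V, dim)
        # Sample-level fusion: each sample gets its own view weighting,
        # in contrast to a single global weight per view.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)  # (N, dim)


def info_nce(fused: torch.Tensor, view_emb: torch.Tensor,
             temperature: float = 0.5) -> torch.Tensor:
    """Generic InfoNCE between the fused and a view-specific embedding;
    only a standard placeholder for the paper's EGCL objective."""
    z1 = F.normalize(fused, dim=-1)
    z2 = F.normalize(view_emb, dim=-1)
    logits = z1 @ z2.t() / temperature                   # (N, N)
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


# Usage sketch: three views, 64 samples, 128-dim embeddings.
fusion = MoEEgoGraphFusion(dim=128, num_views=3)
views = [torch.randn(64, 128) for _ in range(3)]
fused = fusion(views)                                    # (64, 128)
loss = sum(info_nce(fused, v) for v in views)
```

The design choice worth noting is the gate: because its softmax weights are computed per sample, two samples can mix the same views in different proportions, which is what distinguishes this fine-grained fusion from view-level weighted averaging.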