[2604.08980] Neighbourhood Transformer: Switchable Attention for Monophily-Aware Graph Learning
Computer Science > Machine Learning
arXiv:2604.08980 (cs) [Submitted on 10 Apr 2026]
Title: Neighbourhood Transformer: Switchable Attention for Monophily-Aware Graph Learning
Authors: Yi Luo, Xu Sun, Guangchun Luo, Aiguo Chen
Abstract: Graph neural networks (GNNs) have been widely adopted in engineering applications such as social network analysis, chemical research, and computer vision. However, their efficacy is severely compromised by the inherent homophily assumption, which fails to hold for heterophilic graphs, where dissimilar nodes are frequently connected. To address this fundamental limitation in graph learning, we draw inspiration from the recently discovered monophily property of real-world graphs and propose Neighbourhood Transformers (NT), a novel paradigm that applies self-attention within every local neighbourhood instead of aggregating messages to the central node as in conventional message-passing GNNs. This design makes NT inherently monophily-aware and theoretically guarantees that its expressiveness is no weaker than that of traditional message-passing frameworks. For practical engineering deployment, we further develop a neighbourhood partitioning strategy equipped with switchable attention, which reduces the space consumption of NT by over 95% and its time consumption by up to 92.67%, significa...
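The core idea stated in the abstract, running self-attention among the members of each local neighbourhood rather than only aggregating messages into the central node, can be illustrated with a toy sketch. This is a hypothetical NumPy illustration under our own simplifying assumptions (closed neighbourhoods, single attention head, per-node averaging of updates), not the paper's implementation; the function and parameter names (`neighbourhood_self_attention`, `Wq`, `Wk`, `Wv`) are invented for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def neighbourhood_self_attention(X, adj, Wq, Wk, Wv):
    """Toy sketch of neighbourhood-level attention (hypothetical, not the
    paper's method): for each node v, run self-attention among the members
    of v's closed neighbourhood, so every member receives an update, then
    average each node's updates over the neighbourhoods it belongs to.

    X   : (n, d) node features
    adj : (n, n) boolean adjacency matrix
    Wq, Wk, Wv : (d, d) projection matrices
    """
    n, _ = X.shape
    out = np.zeros_like(X @ Wv)
    counts = np.zeros(n)
    for v in range(n):
        # closed neighbourhood: the centre node plus its direct neighbours
        members = [v] + [u for u in range(n) if adj[v, u]]
        H = X[members]                                   # (m, d)
        Q, K, V = H @ Wq, H @ Wk, H @ Wv
        A = softmax(Q @ K.T / np.sqrt(K.shape[1]))       # (m, m) attention
        U = A @ V                                        # updates for ALL members,
                                                         # not just the centre node
        for i, u in enumerate(members):
            out[u] += U[i]
            counts[u] += 1
    return out / counts[:, None]
```

In a conventional message-passing layer only the centre node `v` would receive the aggregated result; here every neighbourhood member gets an update, which is the distinction the abstract draws.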