[2604.08980] Neighbourhood Transformer: Switchable Attention for Monophily-Aware Graph Learning




Computer Science > Machine Learning
arXiv:2604.08980 (cs) [Submitted on 10 Apr 2026]

Title: Neighbourhood Transformer: Switchable Attention for Monophily-Aware Graph Learning
Authors: Yi Luo, Xu Sun, Guangchun Luo, Aiguo Chen

Abstract: Graph neural networks (GNNs) have been widely adopted in engineering applications such as social network analysis, chemical research, and computer vision. However, their efficacy is severely compromised by the inherent homophily assumption, which fails to hold for heterophilic graphs, where dissimilar nodes are frequently connected. To address this fundamental limitation in graph learning, we first draw inspiration from the recently discovered monophily property of real-world graphs and propose Neighbourhood Transformers (NT), a novel paradigm that applies self-attention within every local neighbourhood instead of aggregating messages to the central node as in conventional message-passing GNNs. This design makes NT inherently monophily-aware and theoretically guarantees that its expressiveness is no weaker than that of traditional message-passing frameworks. For practical engineering deployment, we further develop a neighbourhood partitioning strategy equipped with switchable attentions, which reduces the space consumption of NT by over 95% and its time consumption by up to 92.67%, significa...

Originally published on April 13, 2026. Curated by AI News.
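The core idea in the abstract, running self-attention within each node's local neighbourhood so that every member of the neighbourhood is updated, rather than aggregating messages only to the central node, can be sketched in a few lines. The following is a minimal illustrative NumPy sketch of that general pattern, not the paper's actual NT layer; the function name, the closed-neighbourhood convention, and the averaging of contributions across overlapping neighbourhoods are all assumptions made here for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def neighbourhood_attention(X, adj, Wq, Wk, Wv):
    """Illustrative sketch: for each node v, run scaled dot-product
    self-attention among the closed neighbourhood {v} ∪ N(v), and let
    EVERY member of that neighbourhood receive an updated representation
    (contrast with message passing, which updates only the centre v).

    X:   (n, d) node features
    adj: (n, n) boolean adjacency matrix (no self-loops)
    Wq, Wk, Wv: (d, d) projection matrices (hypothetical single head)
    """
    n, d = X.shape
    out = np.zeros_like(X)
    count = np.zeros(n)
    for v in range(n):
        nbrs = [v] + [u for u in range(n) if adj[v, u]]
        H = X[nbrs]                                   # (k, d) neighbourhood features
        Q, K, V = H @ Wq, H @ Wk, H @ Wv
        A = softmax(Q @ K.T / np.sqrt(K.shape[1]))    # (k, k) attention weights
        M = A @ V                                     # one update per member, not one per centre
        for i, u in enumerate(nbrs):
            out[u] += M[i]
            count[u] += 1
    # A node belongs to several neighbourhoods; average its contributions.
    # (Every node is in its own closed neighbourhood, so count >= 1.)
    return out / count[:, None]
```

Note the key difference from a message-passing layer: the per-neighbourhood output `M` has one row per neighbourhood member, and those rows are scattered back to all members, which is what makes the scheme sensitive to similarity among a node's neighbours (the monophily signal) rather than only between neighbours and the centre. The quadratic per-neighbourhood cost this naive loop incurs is exactly what the paper's partitioning and switchable-attention strategy is said to reduce.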

