[2505.19762] Language Models as Messengers: Enhancing Message Passing in Heterophilic Graph Learning
Computer Science > Artificial Intelligence
arXiv:2505.19762 (cs)
[Submitted on 26 May 2025 (v1), last revised 27 Feb 2026 (this version, v2)]

Title: Language Models as Messengers: Enhancing Message Passing in Heterophilic Graph Learning
Authors: Dawei Cheng, Wenjun Wang, Mingjian Guang

Abstract: Graph neural networks (GNNs) have become a standard paradigm for graph representation learning, yet their message passing mechanism implicitly assumes that messages can be represented by source node embeddings, an assumption that fails in heterophilic graphs. While existing methods attempt to address heterophily through graph structure refinement or adaptation of neighbor aggregation, they often overlook the semantic potential of node text, relying on suboptimal message representations for propagation and compromising performance on homophilic graphs. To address these limitations, we propose LEMP4HG, a novel language model (LM)-enhanced message passing approach for heterophilic graph learning. Specifically, for text-attributed graphs (TAGs), we leverage an LM to explicitly model inter-node semantic relationships from paired node texts, synthesizing semantically informed messages for propagation. To ensure practical efficiency, we further introduce an active learning-inspired strategy guided by a tailored heur...
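The core idea of the abstract, replacing the source node's embedding with a message synthesized from the *pair* of node texts, can be illustrated with a minimal sketch. The `toy_pair_encoder` below is a hypothetical stand-in for the language model (LEMP4HG's actual encoder, loss, and active-learning heuristic are not specified in this excerpt); everything else is generic mean aggregation.

```python
import numpy as np

def toy_pair_encoder(text_u: str, text_v: str, dim: int = 8) -> np.ndarray:
    """Placeholder for the LM: deterministically maps a text pair to a vector.
    In the paper's setting, a language model would read both node texts and
    synthesize a semantically informed message for the edge (u, v)."""
    seed = abs(hash((text_u, text_v))) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)

def lm_enhanced_message_passing(edges, texts, dim: int = 8) -> np.ndarray:
    """One round of mean aggregation where each edge's message depends on the
    (source, target) text pair, not only on the source node's embedding."""
    n = len(texts)
    agg = np.zeros((n, dim))
    deg = np.zeros(n)
    for u, v in edges:  # message flows u -> v
        agg[v] += toy_pair_encoder(texts[u], texts[v], dim)
        deg[v] += 1
    deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
    return agg / deg[:, None]

# Tiny heterophilic toy graph: neighbors discuss different topics.
texts = ["paper on graph neural networks",
         "paper on language models",
         "paper on convex optimization"]
edges = [(0, 1), (2, 1), (1, 0)]
h = lm_enhanced_message_passing(edges, texts)
print(h.shape)  # one aggregated message vector per node
```

Because each message is conditioned on both endpoints' texts, two heterophilic neighbors can exchange a message that captures their semantic relationship rather than merely copying the (dissimilar) source embedding.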