[2603.27685] CrossHGL: A Text-Free Foundation Model for Cross-Domain Heterogeneous Graph Learning
Computer Science > Machine Learning
arXiv:2603.27685 (cs) [Submitted on 29 Mar 2026]

Title: CrossHGL: A Text-Free Foundation Model for Cross-Domain Heterogeneous Graph Learning
Authors: Xuanze Chen, Jiajun Zhou, Yadong Li, Shanqing Yu, Qi Xuan

Abstract: Heterogeneous graph representation learning (HGRL) is essential for modeling complex systems with diverse node and edge types. However, most existing methods are limited to closed-world settings with shared schemas and feature spaces, hindering cross-domain generalization. While recent graph foundation models improve transferability, they often target homogeneous graphs, rely on domain-specific schemas, or require rich textual attributes. Consequently, text-free and few-shot cross-domain HGRL remains underexplored. To address this, we propose CrossHGL, a foundation framework that preserves and transfers multi-relational structural semantics without external textual supervision. Specifically, a semantic-preserving transformation strategy homogenizes heterogeneous graphs while encoding interaction semantics into edge features. Based on this, a prompt-aware multi-domain pre-training framework with a Tri-Prompt mechanism captures transferable knowledge across feature, edge, and structure perspectives via self-supervised contrastive learning. For target-...
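The semantic-preserving transformation described in the abstract can be pictured as follows: replace typed edges with plain edges, but record each edge's original (source type, relation, target type) signature as an edge-feature vector. The sketch below is an illustrative assumption based only on the abstract's wording, not the paper's actual algorithm or API; the one-hot encoding and all function names are hypothetical.

```python
def homogenize(nodes, edges):
    """Hypothetical sketch of a semantic-preserving homogenization.

    nodes: {node_id: node_type}
    edges: list of (src, relation, dst) typed edges
    Returns a plain (untyped) edge list plus one edge-feature vector per
    edge that preserves the (src_type, relation, dst_type) semantics.
    """
    # Enumerate every distinct interaction signature in the graph.
    signatures = sorted({(nodes[s], r, nodes[d]) for s, r, d in edges})
    index = {sig: i for i, sig in enumerate(signatures)}

    homo_edges, edge_feats = [], []
    for s, r, d in edges:
        homo_edges.append((s, d))          # typed edge -> plain edge
        feat = [0.0] * len(signatures)     # one-hot semantic encoding
        feat[index[(nodes[s], r, nodes[d])]] = 1.0
        edge_feats.append(feat)
    return homo_edges, edge_feats


# Toy academic graph: an author writes a paper, which cites another paper.
nodes = {"a1": "author", "p1": "paper", "p2": "paper"}
edges = [("a1", "writes", "p1"), ("p1", "cites", "p2")]
homo_edges, edge_feats = homogenize(nodes, edges)
```

After this step the graph is homogeneous (any GNN for plain graphs applies), yet no relational information is lost: the edge features still distinguish an author-writes-paper interaction from a paper-cites-paper one.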