[2602.13075] Unified Multi-Domain Graph Pre-training for Homogeneous and Heterogeneous Graphs via Domain-Specific Expert Encoding

arXiv - Machine Learning · 4 min read

Summary

This paper presents a unified approach to graph pre-training that effectively integrates both homogeneous and heterogeneous graphs, addressing the limitations of existing methods.

Why It Matters

Graph pre-training is crucial for improving model performance across downstream tasks. Existing methods typically treat homogeneous and heterogeneous graphs separately, a gap that matters in real-world applications where mixed graph types are common. By proposing a unified framework, this work makes graph-based models more adaptable and effective across diverse domains.

Key Takeaways

  • Introduces a unified multi-domain graph pre-training method ($GPH^{2}$) for both homogeneous and heterogeneous graphs.
  • Demonstrates that a balanced mixture of graph types improves performance on downstream tasks.
  • Proposes domain-specific expert encoding to mitigate cross-domain discrepancies.
  • Implements a Task-oriented Expert Fusion Strategy to leverage the strengths of multiple experts.
  • Shows significant performance improvements over existing graph pre-training methods in mixed graph scenarios.
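The takeaways above describe a mixture-of-experts design: each graph domain gets its own expert encoder, and a task-oriented strategy fuses their outputs. The paper's actual architecture is not detailed here, so the following is only a minimal numpy sketch of that general pattern; the `Expert` class, the linear encoders, and the query-based softmax gating are all illustrative stand-ins, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

class Expert:
    """Toy domain-specific encoder: a single linear projection.
    A stand-in for a per-domain graph encoder (hypothetical)."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.normal(scale=0.1, size=(in_dim, out_dim))

    def encode(self, x):
        return x @ self.W

def fuse(experts, x, task_query):
    """Task-oriented fusion sketch: weight each expert's output by its
    similarity to a task query vector, then take the weighted sum."""
    outs = np.stack([e.encode(x) for e in experts])  # (num_experts, out_dim)
    scores = outs @ task_query                       # (num_experts,)
    weights = softmax(scores)                        # sums to 1
    return weights @ outs, weights                   # fused: (out_dim,)

# Three experts, e.g. for homogeneous and heterogeneous source domains.
experts = [Expert(8, 4) for _ in range(3)]
x = rng.normal(size=8)          # a pooled node/graph representation
task_query = rng.normal(size=4) # task embedding steering the fusion
fused, w = fuse(experts, x, task_query)
```

The gate here is a simple dot-product score; the paper's Task-oriented Expert Fusion Strategy presumably learns its weighting end to end, which this sketch does not attempt.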

Computer Science > Machine Learning
arXiv:2602.13075 (cs) [Submitted on 13 Feb 2026]
Title: Unified Multi-Domain Graph Pre-training for Homogeneous and Heterogeneous Graphs via Domain-Specific Expert Encoding
Authors: Chundong Liang, Yongqi Huang, Dongxiao He, Peiyuan Li, Yawen Li, Di Jin, Weixiong Zhang
Abstract: Graph pre-training has achieved remarkable success in recent years, delivering transferable representations for downstream adaptation. However, most existing methods are designed for either homogeneous or heterogeneous graphs, thereby hindering unified graph modeling across diverse graph types. This separation contradicts real-world applications, where mixed homogeneous and heterogeneous graphs are ubiquitous, and distribution shifts between upstream pre-training and downstream deployment are common. In this paper, we empirically demonstrate that a balanced mixture of homogeneous and heterogeneous graph pre-training benefits downstream tasks and propose a unified multi-domain \textbf{G}raph \textbf{P}re-training method across \textbf{H}omogeneous and \textbf{H}eterogeneous graphs ($\mathbf{GPH^{2}}$). To address the lack of a unified encoder for homogeneous and heterogeneous graphs, we propose a Unified Multi-View Graph Construction that simultaneously e...
