[2601.22454] Temporal Graph Pattern Machine
Summary
The Temporal Graph Pattern Machine (TGPM) is a novel framework for temporal graph learning that focuses on learning generalized evolving patterns to improve link prediction in dynamic systems.
Why It Matters
Understanding temporal graph patterns is crucial for applications ranging from social networks to other dynamic systems. TGPM addresses limitations of existing methods by capturing long-range dependencies and by learning evolution mechanisms that transfer across domains, rather than task-specific representations.
Key Takeaways
- TGPM shifts focus from task-centric methods to learning generalized evolving patterns.
- It utilizes temporally-biased random walks to capture multi-scale structural semantics.
- The model incorporates a Transformer backbone to capture global temporal regularities.
- Self-supervised pre-training tasks enhance the model's understanding of network evolution.
- TGPM demonstrates state-of-the-art performance in link prediction tasks.
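To make the second takeaway concrete, here is a minimal sketch of one plausible form of temporally-biased random walk: from the current node, earlier interactions are sampled with probability weighted toward recency via an exponential decay. The event format, the `tau` temperature, and the decay itself are illustrative assumptions, not the paper's exact sampling rule.

```python
import math
import random

def temporally_biased_walk(events, start_node, start_time, walk_len=4, tau=1.0, rng=None):
    """Sample one temporally-biased random walk (illustrative sketch).

    events: list of (u, v, t) interactions. Only edges strictly earlier
    than the current time are candidates, and more recent edges are
    favored via exp(-(t_now - t) / tau). The paper's exact bias may differ.
    """
    rng = rng or random.Random(0)
    node, t_now = start_node, start_time
    walk = [(node, t_now)]
    for _ in range(walk_len):
        # collect earlier interactions incident to the current node
        cands = [(v, t) for u, v, t in events if u == node and t < t_now]
        cands += [(u, t) for u, v, t in events if v == node and t < t_now]
        if not cands:
            break
        # recency-biased sampling: weight proportional to exp(-(t_now - t) / tau)
        weights = [math.exp(-(t_now - t) / tau) for _, t in cands]
        node, t_now = rng.choices(cands, weights=weights, k=1)[0]
        walk.append((node, t_now))
    return walk

# toy event stream: (source, destination, timestamp)
events = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0), (2, 3, 4.0)]
walk = temporally_biased_walk(events, start_node=2, start_time=5.0)
```

Because each step only considers edges earlier than the current time, the walk moves backward through the interaction history, which is what lets a single walk aggregate multi-hop, long-range context around one interaction.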
arXiv Details
Computer Science > Machine Learning, arXiv:2601.22454 (cs)
Submitted on 30 Jan 2026 (v1); last revised 18 Feb 2026 (this version, v2)
Title: Temporal Graph Pattern Machine
Authors: Yijun Ma, Zehong Wang, Weixiang Sun, Yanfang Ye
Abstract: Temporal graph learning is pivotal for deciphering dynamic systems, where the core challenge lies in explicitly modeling the underlying evolving patterns that govern network transformation. However, prevailing methods are predominantly task-centric and rely on restrictive assumptions -- such as short-term dependency modeling, static neighborhood semantics, and retrospective time usage. These constraints hinder the discovery of transferable temporal evolution mechanisms. To address this, we propose the Temporal Graph Pattern Machine (TGPM), a foundation framework that shifts the focus toward directly learning generalized evolving patterns. TGPM conceptualizes each interaction as an interaction patch synthesized via temporally-biased random walks, thereby capturing multi-scale structural semantics and long-range dependencies that extend beyond immediate neighborhoods. These patches are processed by a Transformer-based backbone designed to capture global temporal regularities while adapting to context-specific interaction dynamics. To further empower the model, we introduce a suite of self-supervised pre-training tasks --...
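The abstract describes interaction patches being processed by a Transformer-based backbone. As a rough illustration of the mechanism involved, the sketch below implements a single self-attention head over a sequence of patch embeddings in NumPy, letting each patch attend to every other and mix in global context. The dimensions and random weights are hypothetical; this is not the paper's architecture, only the generic attention operation such a backbone builds on.

```python
import numpy as np

def self_attention(patches, Wq, Wk, Wv):
    """One attention head over a sequence of patch embeddings.

    patches: (n_patches, d) array of interaction-patch embeddings.
    Returns an updated (n_patches, d) sequence in which each patch
    is a weighted mixture of all patches' value projections.
    """
    Q, K, V = patches @ Wq, patches @ Wk, patches @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # numerically stable row-wise softmax over keys
    scores -= scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ V

rng = np.random.default_rng(0)
d = 8
patches = rng.normal(size=(5, d))            # 5 hypothetical interaction patches
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(patches, Wq, Wk, Wv)
```

Stacking such layers (with feed-forward blocks and positional or temporal encodings) is what allows a Transformer backbone to capture regularities across the whole interaction sequence rather than only local neighborhoods.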