[2602.23135] DyGnROLE: Modeling Asymmetry in Dynamic Graphs with Node-Role-Oriented Latent Encoding
Summary
The paper presents DyGnROLE, a transformer-based model for dynamic graphs that distinguishes between source and destination nodes to improve edge classification performance.
Why It Matters
Many real-world dynamic graphs are directed, with source and destination nodes exhibiting asymmetric behavioral and temporal patterns that shared-parameter architectures fail to capture. DyGnROLE's role-aware modeling addresses this limitation directly, and its self-supervised pretraining makes the approach usable even when labeled data is scarce.
Key Takeaways
- DyGnROLE uses separate embeddings for source and destination nodes to capture unique structural contexts.
- Introduces a self-supervised pretraining objective, Temporal Contrastive Link Prediction (TCLP), enhancing model performance with unlabeled data.
- Demonstrates significant improvements in edge classification over existing state-of-the-art models.
Abstract
Computer Science > Machine Learning · arXiv:2602.23135 (cs) · Submitted on 26 Feb 2026
Authors: Tyler Bonnet, Marek Rei
Real-world dynamic graphs are often directed, with source and destination nodes exhibiting asymmetrical behavioral patterns and temporal dynamics. However, existing dynamic graph architectures largely rely on shared parameters for processing source and destination nodes, with limited or no systematic role-aware modeling. We propose DyGnROLE (Dynamic Graph Node-Role-Oriented Latent Encoding), a transformer-based architecture that explicitly disentangles source and destination representations. By using separate embedding vocabularies and role-semantic positional encodings, the model captures the distinct structural and temporal contexts unique to each role. Critical to the effectiveness of these specialized embeddings in low-label regimes is a self-supervised pretraining objective we introduce: Temporal Contrastive Link Prediction (TCLP). The pretraining uses the full unlabeled interaction history to encode informative structural biases, enabling the model to learn role-specific representations without requiring annotated data. Evaluation on future edge classification demonstrate...