[2602.23135] DyGnROLE: Modeling Asymmetry in Dynamic Graphs with Node-Role-Oriented Latent Encoding

arXiv - AI · 3 min read

Summary

The paper presents DyGnROLE, a transformer-based model for dynamic graphs that distinguishes between source and destination nodes to improve edge classification performance.

Why It Matters

Dynamic graphs are prevalent in various fields, and understanding their asymmetrical behaviors is crucial for accurate modeling. DyGnROLE's innovative approach to role-aware modeling addresses limitations of existing architectures, potentially advancing research in machine learning and AI applications.

Key Takeaways

  • DyGnROLE uses separate embeddings for source and destination nodes to capture unique structural contexts.
  • Introduces a self-supervised pretraining objective, Temporal Contrastive Link Prediction (TCLP), enhancing model performance with unlabeled data.
  • Demonstrates significant improvements in edge classification over existing state-of-the-art models.
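The first takeaway can be made concrete with a minimal sketch. The snippet below is a hypothetical illustration (not the paper's code) of role-separate embedding vocabularies: the same node id maps to different vectors depending on whether it acts as a source or a destination. Table sizes, names, and the `embed` helper are all assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of role-separate embedding tables: one vocabulary
# for nodes acting as sources, another for nodes acting as destinations.
rng = np.random.default_rng(0)
num_nodes, dim = 5, 4
src_table = rng.normal(size=(num_nodes, dim))  # source-role embeddings
dst_table = rng.normal(size=(num_nodes, dim))  # destination-role embeddings

def embed(node_id, role):
    """Look up a node's embedding in the table for its current role."""
    table = src_table if role == "src" else dst_table
    return table[node_id]

# The same node gets a different representation per role:
v_src = embed(2, "src")
v_dst = embed(2, "dst")
```

With shared parameters, `v_src` and `v_dst` would coincide; keeping the tables separate lets the model encode how node 2's behavior differs when it initiates an interaction versus when it receives one.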

Computer Science > Machine Learning
arXiv:2602.23135 (cs) [Submitted on 26 Feb 2026]

Title: DyGnROLE: Modeling Asymmetry in Dynamic Graphs with Node-Role-Oriented Latent Encoding
Authors: Tyler Bonnet, Marek Rei

Abstract: Real-world dynamic graphs are often directed, with source and destination nodes exhibiting asymmetrical behavioral patterns and temporal dynamics. However, existing dynamic graph architectures largely rely on shared parameters for processing source and destination nodes, with limited or no systematic role-aware modeling. We propose DyGnROLE (Dynamic Graph Node-Role-Oriented Latent Encoding), a transformer-based architecture that explicitly disentangles source and destination representations. By using separate embedding vocabularies and role-semantic positional encodings, the model captures the distinct structural and temporal contexts unique to each role. Critical to the effectiveness of these specialized embeddings in low-label regimes is a self-supervised pretraining objective we introduce: Temporal Contrastive Link Prediction (TCLP). The pretraining uses the full unlabeled interaction history to encode informative structural biases, enabling the model to learn role-specific representations without requiring annotated data. Evaluation on future edge classification demonstrates...
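The abstract does not spell out the TCLP objective, but contrastive link prediction is commonly formulated as an InfoNCE-style loss. The sketch below is a generic illustration of that family of objectives, not the paper's exact TCLP formulation; the function name, cosine similarity, and negative-sampling setup are assumptions.

```python
import numpy as np

def contrastive_link_loss(src, pos_dst, neg_dsts, temperature=0.1):
    """InfoNCE-style loss: pull a source embedding toward the destination it
    actually interacted with, push it away from sampled non-destinations."""
    def sim(a, b):  # cosine similarity
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Positive pair at index 0, negatives after it.
    logits = np.array([sim(src, pos_dst)] + [sim(src, n) for n in neg_dsts])
    logits /= temperature
    log_probs = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    return -log_probs[0]  # negative log-likelihood of the true link

rng = np.random.default_rng(0)
src = rng.normal(size=8)
pos = src + 0.1 * rng.normal(size=8)           # destination similar to the source
negs = [rng.normal(size=8) for _ in range(5)]  # random negative destinations
loss_true_link = contrastive_link_loss(src, pos, negs)
loss_wrong_link = contrastive_link_loss(src, negs[0], [pos] + negs[1:])
```

A true interaction yields a lower loss than a mislabeled one, which is the signal that lets pretraining exploit the full unlabeled interaction history.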

Related Articles

[2603.16790] InCoder-32B: Code Foundation Model for Industrial Scenarios
LLMs · arXiv - AI · 4 min

[2603.16430] EngGPT2: Sovereign, Efficient and Open Intelligence
LLMs · arXiv - AI · 4 min

[2603.13846] Is Seeing Believing? Evaluating Human Sensitivity to Synthetic Video
Machine Learning · arXiv - AI · 3 min

[2603.13294] Real-World AI Evaluation: How FRAME Generates Systematic Evidence to Resolve the Decision-Maker's Dilemma
Machine Learning · arXiv - AI · 4 min
