[2507.03267] GDGB: A Benchmark for Generative Dynamic Text-Attributed Graph Learning

arXiv - AI · 4 min read

Summary

The paper presents GDGB, a benchmark for Generative Dynamic Text-Attributed Graph Learning, addressing the limitations of existing datasets and proposing new evaluation metrics and tasks.

Why It Matters

This research is significant as it establishes a standardized framework for evaluating generative tasks in dynamic text-attributed graphs, which are essential for modeling complex systems. By improving dataset quality and introducing novel tasks, it enhances the applicability of generative models in real-world scenarios.

Key Takeaways

  • GDGB introduces high-quality datasets for dynamic text-attributed graphs.
  • Two new tasks, Transductive Dynamic Graph Generation (TDGG) and Inductive Dynamic Graph Generation (IDGG), are defined for generative graph learning.
  • Multifaceted evaluation metrics assess the quality of generated graphs.
  • The framework supports reproducibility and robustness in benchmarking.
  • Findings highlight the importance of structural and textual features in graph generation.

Computer Science > Artificial Intelligence
arXiv:2507.03267 (cs)
[Submitted on 4 Jul 2025 (v1), last revised 18 Feb 2026 (this version, v2)]

Title: GDGB: A Benchmark for Generative Dynamic Text-Attributed Graph Learning
Authors: Jie Peng, Jiarui Ji, Runlin Lei, Zhewei Wei, Yongchao Liu, Chuntao Hong

Abstract: Dynamic Text-Attributed Graphs (DyTAGs), which intricately integrate structural, temporal, and textual attributes, are crucial for modeling complex real-world systems. However, most existing DyTAG datasets exhibit poor textual quality, which severely limits their utility for generative DyTAG tasks requiring semantically rich inputs. Additionally, prior work mainly focuses on discriminative tasks on DyTAGs, resulting in a lack of standardized task formulations and evaluation protocols tailored for DyTAG generation. To address these critical issues, we propose Generative DyTAG Benchmark (GDGB), which comprises eight meticulously curated DyTAG datasets with high-quality textual features for both nodes and edges, overcoming limitations of prior datasets. Building on GDGB, we define two novel DyTAG generation tasks: Transductive Dynamic Graph Generation (TDGG) and Inductive Dynamic Graph Generation (IDGG). TDGG transductively generates a target DyTAG based on the given source and destination node sets, while the m...
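To make the transductive setting concrete, here is a minimal sketch of what a DyTAG and the TDGG task interface might look like. This is purely illustrative: the data model, field names, and the trivial round-robin "generator" are assumptions for exposition, not the GDGB paper's actual format or method.

```python
from dataclasses import dataclass
from typing import List

# Assumed representation: a DyTAG as a stream of timestamped,
# text-attributed edges over fixed node sets (hypothetical schema).
@dataclass
class DyTAGEdge:
    src: str          # source node id
    dst: str          # destination node id
    timestamp: float  # event time
    text: str         # textual attribute of the edge

def transductive_generate(src_nodes: List[str],
                          dst_nodes: List[str],
                          num_events: int) -> List[DyTAGEdge]:
    """Toy stand-in for a TDGG generator: in the transductive setting the
    source and destination node sets are given, and the task is to generate
    the target DyTAG over them. This placeholder just pairs nodes
    round-robin; a real model would jointly learn structure, timing,
    and edge text."""
    edges = []
    for t in range(num_events):
        src = src_nodes[t % len(src_nodes)]
        dst = dst_nodes[t % len(dst_nodes)]
        edges.append(DyTAGEdge(src, dst, float(t), f"event {t}: {src}->{dst}"))
    return edges
```

In IDGG, by contrast, new nodes may appear during generation, so the node sets themselves are not fixed in advance; the interface above would need to emit node text as well as edges.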

Related Articles

UMKC Announces New Master of Science in Artificial Intelligence

UMKC announces a new Master of Science in Artificial Intelligence program aimed at addressing workforce demand for AI expertise, set to l...

AI News - General · 4 min
[R] Looking for arXiv cs.LG endorser, inference monitoring using information geometry

Hi r/MachineLearning, I'm looking for an arXiv endorser in cs.LG for a paper on inference-time distribution shift detection for deployed ...

Reddit - Machine Learning · 1 min
Top 10 AI certifications and courses for 2026

This article reviews the top 10 AI certifications and courses for 2026, highlighting their significance in a rapidly evolving field and t...

AI Events · 15 min
[P] MCGrad: fix calibration of your ML model in subgroups

Hi r/MachineLearning, We're open-sourcing MCGrad, a Python package for multicalibration, developed and deployed in production at Meta. Thi...

Reddit - Machine Learning · 1 min
