[2301.12230] Continual Graph Learning: A Survey
Computer Science > Machine Learning

arXiv:2301.12230 (cs)

[Submitted on 28 Jan 2023 (v1), last revised 29 Mar 2026 (this version, v2)]

Title: Continual Graph Learning: A Survey

Authors: Qiao Yuan, Sheng-Uei Guan, Pin Ni, Tianlun Luo, Ka Lok Man, Prudence Wong, Victor Chang

Abstract: Continual Graph Learning (CGL) enables models to incrementally learn from streaming graph-structured data without forgetting previously acquired knowledge. Experience replay is a common solution that reuses a subset of past samples during training; however, it may lead to information loss and privacy risks. Generative replay addresses these concerns by synthesizing informative subgraphs for rehearsal. Existing generative replay approaches often rely on graph condensation via distribution matching, which faces two key challenges: (1) random feature encodings may fail to capture the characteristic kernel of the discrepancy metric, weakening distribution alignment; and (2) matching over a fixed small subgraph cannot guarantee low risk on previous tasks, as indicated by domain adaptation theory. To overcome these limitations, we propose an Adversarial Condensation based Generative Replay (ACGR) framework. It reformulates graph condensation as a min-max optimization problem to achieve better distribution matching. Moreover, instead of learning a single subgraph, we ...
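The min-max view of condensation described in the abstract can be illustrated with a deliberately simplified sketch. This is not the paper's ACGR algorithm: all names are hypothetical, graph structure is ignored, and the adversary is a single norm-bounded linear encoder. The inner maximization aligns the encoder with the direction of largest mean-embedding discrepancy between real and condensed node features; the outer minimization then moves the condensed features to close that gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: "real" node features from a previous task and a
# small condensed feature matrix to be learned (graph structure omitted).
X_real = rng.normal(loc=1.0, scale=0.5, size=(200, 8))
X_syn = rng.normal(size=(10, 8))

# Linear "critic" encoder playing the adversary in the min-max game.
W = rng.normal(size=(8, 4))

def mean_discrepancy(Xr, Xs, W):
    """Squared distance between mean embeddings under encoder W."""
    d = Xr.mean(0) @ W - Xs.mean(0) @ W
    return float(d @ d)

gap0 = mean_discrepancy(X_real, X_syn, W)  # gap before training

lr = 0.1
for step in range(300):
    diff = X_real.mean(0) - X_syn.mean(0)

    # Inner max: the critic ascends the gradient of ||diff @ W||^2,
    # i.e. 2 * outer(diff, diff @ W), then is norm-bounded.
    W = W + lr * 2.0 * np.outer(diff, diff @ W)
    W = W / max(1.0, np.linalg.norm(W))

    # Outer min: condensed features descend the same objective;
    # each of the n rows of X_syn receives -2 W W^T diff / n.
    grad_syn = -2.0 * (W @ (W.T @ diff)) / X_syn.shape[0]
    X_syn = X_syn - lr * grad_syn

gap = mean_discrepancy(X_real, X_syn, W)  # gap after alternating updates
```

With these toy settings the alternating updates shrink the discrepancy well below its initial value; the design point is that letting the encoder adapt adversarially exposes mismatch directions that a fixed random encoding would miss.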