[2603.24304] CGRL: Causal-Guided Representation Learning for Graph Out-of-Distribution Generalization


arXiv - Machine Learning

About this article

arXiv:2603.24304 [stat.ML] · Submitted on 25 Mar 2026

Title: CGRL: Causal-Guided Representation Learning for Graph Out-of-Distribution Generalization

Authors: Bowen Lu, Liangqiang Yang, Teng Li

Abstract: Graph Neural Networks (GNNs) have achieved impressive performance on graph-related tasks. However, they generalize poorly to out-of-distribution (OOD) data because they tend to learn spurious correlations. Under such correlations, GNNs fail to stably learn the mutual information between prediction representations and ground-truth labels in OOD settings. To address these challenges, we formulate a causal graph starting from the essence of node classification, adopt backdoor adjustment to block non-causal paths, and theoretically derive a lower bound for improving the OOD generalization of GNNs. To materialize these insights, we further propose a novel approach that integrates causal representation learning with a loss replacement strategy. The former captures node-level causal invariance and reconstructs the graph posterior distribution; the latter introduces asymptotic losses of the same order to replace the original losses. Extensive experiments demonstrate the superiority of our method in OOD generalization and its effectiveness in alleviating the phenomenon of unstable mutual i...
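The abstract's key causal tool, backdoor adjustment, can be illustrated on a toy discrete example (this is a generic sketch of the backdoor formula P(Y | do(X)) = Σ_z P(Y | X, z) P(z), not the paper's GNN-specific construction; the variable names and probabilities below are invented for illustration):

```python
import numpy as np

# Toy confounded system: Z influences both treatment X and outcome Y.
# Conditioning naively on X picks up the spurious path X <- Z -> Y;
# backdoor adjustment over Z blocks that non-causal path.
rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, size=n)                                 # confounder Z
x = (rng.random(n) < np.where(z == 1, 0.8, 0.2)).astype(int)   # X depends on Z
y = (rng.random(n) < 0.3 + 0.4 * z + 0.1 * x).astype(int)      # true X->Y effect is 0.1

def p_y_given(x_val):
    """Naive observational P(Y=1 | X=x), confounded by Z."""
    return y[x == x_val].mean()

def p_y_do(x_val):
    """Backdoor-adjusted P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) P(Z=z)."""
    total = 0.0
    for z_val in (0, 1):
        mask = (x == x_val) & (z == z_val)
        total += y[mask].mean() * (z == z_val).mean()
    return total

naive_effect = p_y_given(1) - p_y_given(0)   # inflated by the confounder
causal_effect = p_y_do(1) - p_y_do(0)        # close to the true effect 0.1
```

The adjusted contrast recovers the true interventional effect of X on Y, while the naive conditional contrast is inflated by the confounder; CGRL applies the same blocking idea to non-causal paths in a causal graph of node classification.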

Originally published on March 26, 2026. Curated by AI News.

