[2603.22984] Can Graph Foundation Models Generalize Over Architecture?
Computer Science > Machine Learning

arXiv:2603.22984 (cs)

[Submitted on 24 Mar 2026]

Title: Can Graph Foundation Models Generalize Over Architecture?

Authors: Benjamin Gutteridge, Michael Bronstein, Xiaowen Dong

Abstract: Graph foundation models (GFMs) have recently attracted interest due to the promise of graph neural network (GNN) architectures that generalize zero-shot across graphs of arbitrary scales, feature dimensions, and domains. While existing work has demonstrated this ability empirically across diverse real-world benchmarks, these tasks share a crucial hidden limitation: they admit a narrow set of effective GNN architectures. In particular, current domain-agnostic GFMs rely on fixed architectural backbones, implicitly assuming that a single message-passing regime suffices across tasks. In this paper, we argue that architecture adaptivity is a necessary requirement for true GFMs. We show that existing approaches are non-robust to task-dependent architectural attributes and, as a case study, use range as a minimal and measurable axis along which this limitation becomes explicit. With theoretical analysis and controlled synthetic experiments, we demonstrate that fixed-backbone GFMs provably under-reach on tasks whose architectural requirements differ from those seen at training time. To address this issue, w...
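As a rough illustration of the "range" axis the abstract refers to (this sketch is not from the paper; the path graph, depth values, and helper name `k_hop_reach` are illustrative assumptions), the snippet below shows how the receptive field of a k-layer message-passing network is capped at k hops, so a fixed-depth backbone cannot mix information across a longer path, which is the sense in which it "under-reaches":

```python
# Minimal sketch: the receptive field of a k-layer message-passing GNN
# covers only nodes within k hops, so a fixed depth bounds the "range"
# of any task the backbone can solve.
import numpy as np

def k_hop_reach(adj: np.ndarray, k: int) -> np.ndarray:
    """Boolean matrix whose (i, j) entry is True iff node j lies within
    k hops of node i, i.e. inside the receptive field of a k-layer
    message-passing network (self-loops included)."""
    n = adj.shape[0]
    reach = np.eye(n, dtype=bool)
    frontier = np.eye(n, dtype=bool)
    for _ in range(k):
        frontier = (frontier @ adj) > 0  # expand the frontier by one hop
        reach |= frontier
    return reach

# Path graph on 8 nodes: 0 - 1 - 2 - ... - 7
n = 8
adj = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1

for k in (2, 4, 7):
    sees_far_end = k_hop_reach(adj, k)[0, n - 1]
    print(f"depth k={k}: node 0 reaches node {n - 1}? {sees_far_end}")
# depth k=2: False  -> a shallow fixed backbone cannot couple the endpoints
# depth k=4: False
# depth k=7: True   -> only a sufficiently long-range model covers this task
```

The same reasoning applies to any fixed message-passing regime: if the task requires mixing information over more hops than the backbone provides, no amount of training data on short-range tasks changes that structural limit.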