[2602.13106] Which Algorithms Can Graph Neural Networks Learn?
Summary
This paper explores the capabilities of graph neural networks (GNNs) in learning discrete algorithms, proposing a theoretical framework for their generalization beyond finite training sets.
Why It Matters
Understanding how graph neural networks can learn algorithms is crucial for advancing neural algorithmic reasoning. This research clarifies both the limitations and the potential of message-passing graph neural networks (MPNNs), which can inform future developments in machine learning and AI applications.
Key Takeaways
- Graph neural networks can learn certain algorithms under specific conditions.
- The study presents a theoretical framework for generalizing MPNN capabilities.
- Impossibility results delineate tasks that standard MPNNs provably cannot learn.
- More expressive MPNN architectures can overcome these limitations.
- Empirical results corroborate the theoretical findings on algorithmic learning.
Computer Science > Machine Learning
arXiv:2602.13106 (cs)
[Submitted on 13 Feb 2026]
Title: Which Algorithms Can Graph Neural Networks Learn?
Authors: Solveig Wittig, Antonis Vasileiou, Robert R. Nerem, Timo Stoll, Floris Geerts, Yusu Wang, Christopher Morris
Abstract: In recent years, there has been growing interest in understanding neural architectures' ability to learn to execute discrete algorithms, a line of work often referred to as neural algorithmic reasoning. The goal is to integrate algorithmic reasoning capabilities into larger neural pipelines. Many such architectures are based on (message-passing) graph neural networks (MPNNs), owing to their permutation equivariance and ability to deal with sparsity and variable-sized inputs. However, existing work is either largely empirical and lacks formal guarantees, or it focuses solely on expressivity, leaving open the question of when and how such architectures generalize beyond a finite training set. In this work, we propose a general theoretical framework that characterizes the sufficient conditions under which MPNNs can learn an algorithm from a training set of small instances and provably approximate its behavior on inputs of arbitrary size. Our framework applies to a broad class of algorithms, including single-source shortest paths, minimum spanning trees, and general dynamic programming pro...
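To make the connection between message passing and the algorithms named in the abstract concrete, here is a minimal sketch (not code from the paper) of the well-known alignment between an MPNN layer and the Bellman-Ford relaxation for single-source shortest paths: each node holds a distance estimate as its state, receives messages `h[u] + w(u, v)` from its neighbors, and aggregates them with a min, which is exactly the aggregate-and-update pattern a message-passing layer computes. The function name and data layout below are illustrative choices, not from the paper.

```python
# Illustrative sketch: Bellman-Ford phrased as synchronous message passing.
# Node states h[v] are current distance estimates; each round, every node
# aggregates incoming messages h[u] + w(u, v) with a min and updates its state.

def bellman_ford_message_passing(edges, num_nodes, source):
    """edges: list of (u, v, weight) tuples for a directed graph."""
    INF = float("inf")
    h = [INF] * num_nodes
    h[source] = 0.0
    # At most num_nodes - 1 synchronous rounds are needed for the
    # distance estimates to stabilize on shortest-path values.
    for _ in range(num_nodes - 1):
        new_h = list(h)
        for u, v, w in edges:
            # "Message" from u to v, combined by min-aggregation.
            new_h[v] = min(new_h[v], h[u] + w)
        h = new_h
    return h

dists = bellman_ford_message_passing(
    [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 5.0)],
    num_nodes=4, source=0)
# dists == [0.0, 3.0, 1.0, 8.0]
```

A trained MPNN approximating this algorithm would replace the hand-written min and addition with learned aggregation and update functions; the paper's question is when such a network, trained only on small instances, provably tracks this computation on graphs of arbitrary size.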