[2505.08021] The Correspondence Between Bounded Graph Neural Networks and Fragments of First-Order Logic
Summary
This paper establishes exact correspondences between bounded Graph Neural Networks (GNNs) and fragments of first-order logic, characterizing the expressive power of these architectures in graph representation learning.
Why It Matters
Understanding the correspondence between GNNs and first-order logic enhances our grasp of their capabilities, which is crucial for advancing AI applications that rely on graph-structured data. This research contributes to the theoretical foundation of GNNs and their logical expressiveness.
Key Takeaways
- The proposed bounded GNN architectures correspond precisely to prominent fragments of first-order logic, including modal logics and more expressive two-variable fragments.
- The study introduces new GNN architectures that align with these modal logics.
- The findings offer a unifying framework for assessing the logical expressiveness of GNNs within first-order logic.
- Proof techniques from the finite model theory of first-order and modal logics are applied to graph representation learning.
- The results have implications for improving AI applications on graph-structured data.
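To make the correspondence concrete, here is a minimal sketch (not the paper's actual architecture) of the well-known idea behind such results: a message-passing layer with sum aggregation and a threshold activation can evaluate a graded modal formula such as "the node has at least 2 neighbours labelled Red". The function name, graph, and labels below are invented for illustration.

```python
def gnn_layer(adj, features, threshold=2):
    """One GNN layer: sum-aggregate neighbour features, then apply a
    step (threshold) activation -- mirroring how bounded GNNs can
    express counting modalities of graded modal logic."""
    out = {}
    for v, neighbours in adj.items():
        agg = sum(features[u] for u in neighbours)  # sum aggregation
        out[v] = 1 if agg >= threshold else 0       # step activation
    return out

# Toy graph: node 0 has two Red neighbours (1 and 2); node 3 has one.
adj = {0: [1, 2], 1: [0], 2: [0], 3: [1]}
red = {0: 0, 1: 1, 2: 1, 3: 0}  # 1 = labelled Red

result = gnn_layer(adj, red)
# only node 0 satisfies "at least 2 Red neighbours"
```

Stacking such layers corresponds to nesting modal operators, which is the shape of argument the paper generalizes to richer first-order fragments.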
Computer Science > Artificial Intelligence
arXiv:2505.08021 (cs) [Submitted on 12 May 2025 (v1), last revised 19 Feb 2026 (this version, v4)]
Title: The Correspondence Between Bounded Graph Neural Networks and Fragments of First-Order Logic
Authors: Bernardo Cuenca Grau, Eva Feng, Przemysław Andrzej Wałęga
Abstract: Graph Neural Networks (GNNs) address two key challenges in applying deep learning to graph-structured data: they handle varying size input graphs and ensure invariance under graph isomorphism. While GNNs have demonstrated broad applicability, understanding their expressive power remains an important question. In this paper, we propose GNN architectures that correspond precisely to prominent fragments of first-order logic (FO), including various modal logics as well as more expressive two-variable fragments. To establish these results, we apply methods from finite model theory of first-order and modal logics to the domain of graph representation learning. Our results provide a unifying framework for understanding the logical expressiveness of GNNs within FO.
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2505.08021 [cs.AI] (or arXiv:2505.08021v4 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2505.08021