[2505.08021] The Correspondence Between Bounded Graph Neural Networks and Fragments of First-Order Logic

arXiv - AI · 3 min read

Summary

This paper explores the relationship between Bounded Graph Neural Networks (GNNs) and fragments of first-order logic, providing insights into their expressive power and applicability in graph representation learning.

Why It Matters

Understanding the correspondence between GNNs and first-order logic enhances our grasp of their capabilities, which is crucial for advancing AI applications that rely on graph-structured data. This research contributes to the theoretical foundation of GNNs and their logical expressiveness.

Key Takeaways

  • The proposed bounded GNN architectures correspond precisely to fragments of first-order logic.
  • The study introduces new GNN architectures that align with modal logics.
  • Findings offer a framework for assessing the logical expressiveness of GNNs.
  • Research methods from finite model theory are applied to graph representation.
  • The results have implications for improving AI applications in graph data.

Computer Science > Artificial Intelligence

arXiv:2505.08021 (cs) — Submitted on 12 May 2025 (v1), last revised 19 Feb 2026 (this version, v4)

Title: The Correspondence Between Bounded Graph Neural Networks and Fragments of First-Order Logic

Authors: Bernardo Cuenca Grau, Eva Feng, Przemysław Andrzej Wałęga

Abstract: Graph Neural Networks (GNNs) address two key challenges in applying deep learning to graph-structured data: they handle varying size input graphs and ensure invariance under graph isomorphism. While GNNs have demonstrated broad applicability, understanding their expressive power remains an important question. In this paper, we propose GNN architectures that correspond precisely to prominent fragments of first-order logic (FO), including various modal logics as well as more expressive two-variable fragments. To establish these results, we apply methods from finite model theory of first-order and modal logics to the domain of graph representation learning. Our results provide a unifying framework for understanding the logical expressiveness of GNNs within FO.

Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2505.08021 [cs.AI] (or arXiv:2505.08021v4 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2505.08021
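The kind of object whose logical expressiveness the paper studies is the message-passing GNN layer: each node combines its own features with an aggregate of its neighbours' features. The sketch below is a generic aggregate-combine layer for illustration only, not the paper's bounded architecture; the toy graph, the weight names `w_self` and `w_neigh`, and the choice of sum aggregation with a ReLU are all assumptions made for the example.

```python
import numpy as np

def gnn_layer(adj, features, w_self, w_neigh):
    """One aggregate-combine message-passing step:
    h_v = ReLU(x_v @ W_self + (sum of neighbour features) @ W_neigh)."""
    neigh_sum = adj @ features               # sum-aggregate neighbour features
    combined = features @ w_self + neigh_sum @ w_neigh
    return np.maximum(combined, 0.0)         # ReLU nonlinearity

# Toy graph: a path 0 - 1 - 2 (symmetric adjacency, no self-loops).
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.eye(3)          # one-hot initial node features
w_self = np.eye(3)     # identity weights keep the arithmetic transparent
w_neigh = np.eye(3)

h = gnn_layer(adj, x, w_self, w_neigh)
print(h)  # after one layer, node 1 carries features from both endpoints
```

Layers of this aggregate-combine shape are the ones classically related to modal logics; the paper's contribution is to pin down architectures of this style that match specific FO fragments, including two-variable ones.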

