[2508.03111] GEDAN: Learning the Edit Costs for Graph Edit Distance


Summary

The paper presents GEDAN, a Graph Neural Network framework that learns the edit costs for Graph Edit Distance (GED) end-to-end, replacing the unit-cost assumption of most existing neural approximations and aligning topological with task-specific similarity.

Why It Matters

This research is significant as it tackles the NP-hard problem of Graph Edit Distance, which is crucial for various applications, including molecular analysis. By improving the accuracy of edit cost learning, it enhances the interpretability and effectiveness of graph matching in real-world scenarios.

Key Takeaways

  • GEDAN offers an end-to-end approach to learning edit costs for Graph Edit Distance.
  • The method combines an unsupervised self-organizing mechanism for GED approximation with a Generalized Additive Model that learns contextualized edit costs.
  • Experiments show GEDAN's superiority over non-end-to-end methods in graph matching tasks.
  • The framework reveals meaningful structures in complex graphs, enhancing interpretability.
  • Applications extend to fields such as molecular analysis, showcasing its practical relevance.

Computer Science > Machine Learning
arXiv:2508.03111 (cs)
[Submitted on 5 Aug 2025 (v1), last revised 22 Feb 2026 (this version, v3)]

Title: GEDAN: Learning the Edit Costs for Graph Edit Distance
Authors: Francesco Leonardi, Markus Orsi, Jean-Louis Reymond, Kaspar Riesen

Abstract: Graph Edit Distance (GED) is defined as the minimum cost transformation of one graph into another and is a widely adopted metric for measuring the dissimilarity between graphs. The major problem of GED is that its computation is NP-hard, which has in turn led to the development of various approximation methods, including approaches based on neural networks (NN). However, most NN methods assume a unit cost for edit operations -- a restrictive and often unrealistic simplification, since topological and functional distances rarely coincide in real-world data. In this paper, we propose a fully end-to-end Graph Neural Network framework for learning the edit costs for GED, at a fine-grained level, aligning topological and task-specific similarity. Our method combines an unsupervised self-organizing mechanism for GED approximation with a Generalized Additive Model that flexibly learns contextualized edit costs. Experiments demonstrate that our approach overcomes the limitations of non-end-to-end methods, yielding directly interpretable graph matchings...
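To make the unit-cost assumption concrete, the toy sketch below computes exact GED on tiny labeled graphs by exhaustive node assignment and exposes the edit costs as parameters. This is only an illustration of why non-unit costs matter, not GEDAN's method (GEDAN learns these costs with a neural network rather than fixing them); the `ged` helper, its graph encoding, and its cost parameters are hypothetical names chosen for this example.

```python
from itertools import permutations

def ged(g1_labels, g1_edges, g2_labels, g2_edges,
        node_sub=1.0, node_indel=1.0, edge_indel=1.0):
    """Exact GED via brute-force node assignment (exponential; toy graphs only).

    Graphs are given as {node_id: label} plus a set of (u, v) edge tuples.
    None acts as the epsilon node, i.e. an insertion or deletion slot.
    """
    n1, n2 = list(g1_labels), list(g2_labels)
    size = max(len(n1), len(n2))
    left = n1 + [None] * (size - len(n1))
    right = n2 + [None] * (size - len(n2))
    g2e = {frozenset(e) for e in g2_edges}
    best = float("inf")
    for perm in permutations(right):
        pairs = list(zip(left, perm))
        cost = 0.0
        for u, v in pairs:
            if u is None and v is None:
                continue
            if u is None or v is None:
                cost += node_indel                  # node insertion / deletion
            elif g1_labels[u] != g2_labels[v]:
                cost += node_sub                    # node label substitution
        mapping = {u: v for u, v in pairs if u is not None}
        # Edges of g1 whose endpoints both survive, mapped into g2's id space.
        mapped = {frozenset((mapping[a], mapping[b])) for a, b in g1_edges
                  if mapping[a] is not None and mapping[b] is not None}
        cost += edge_indel * len(mapped ^ g2e)      # unmatched edges, either side
        cost += edge_indel * sum(1 for a, b in g1_edges
                                 if mapping[a] is None or mapping[b] is None)
        best = min(best, cost)
    return best

# Path a-b-c vs. path a-b: with unit costs, delete node c and its edge.
g1 = ({1: "a", 2: "b", 3: "c"}, {(1, 2), (2, 3)})
g2 = ({4: "a", 5: "b"}, {(4, 5)})
print(ged(*g1, *g2))                   # unit costs: 2.0
print(ged(*g1, *g2, node_indel=5.0))  # pricier deletions change the distance
```

Changing the cost parameters changes both the distance and, in general, which node mapping is optimal; GEDAN's contribution is learning such costs end-to-end from a downstream task instead of fixing them by hand.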
