[2602.21342] Archetypal Graph Generative Models: Explainable and Identifiable Communities via Anchor-Dominant Convex Hulls

arXiv - Machine Learning · 4 min read

Summary

The paper introduces GraphHull, an explainable generative model for graph representation learning that improves community detection and link prediction through a two-level convex hull structure.

Why It Matters

This research addresses the critical need for explainability in machine learning models, particularly in graph-based tasks. By providing interpretable predictions and recovering multi-level community structures, GraphHull contributes to more transparent AI systems, which is essential for trust and accountability in AI applications.

Key Takeaways

  • GraphHull utilizes a two-level convex hull structure for graph representation.
  • The model enhances explainability in community detection and link prediction tasks.
  • Experiments show competitive performance compared to existing methods.
  • Local and global archetypes provide clear multi-scale explanations.
  • Incorporation of principled priors ensures model stability and diversity.

Computer Science > Machine Learning · arXiv:2602.21342 (cs) · Submitted on 24 Feb 2026

Title: Archetypal Graph Generative Models: Explainable and Identifiable Communities via Anchor-Dominant Convex Hulls

Authors: Nikolaos Nakis, Chrysoula Kosma, Panagiotis Promponas, Michail Chatzianastasis, Giannis Nikolentzos

Abstract: Representation learning has been essential for graph machine learning tasks such as link prediction, community detection, and network visualization. Despite recent advances in achieving high performance on these downstream tasks, little progress has been made toward self-explainable models. Understanding the patterns behind predictions is equally important, motivating recent interest in explainable machine learning. In this paper, we present GraphHull, an explainable generative model that represents networks using two levels of convex hulls. At the global level, the vertices of a convex hull are treated as archetypes, each corresponding to a pure community in the network. At the local level, each community is refined by a prototypical hull whose vertices act as representative profiles, capturing community-specific variation. This two-level construction yields clear multi-scale explanations: a node's position relative to global archetypes and its local prot...
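To make the two-level idea in the abstract concrete, here is a minimal sketch of how a node could be represented as a convex combination of global archetypes and then refined by per-community local prototypes. All names, dimensions, and the link-prediction score below are illustrative assumptions, not the paper's actual model or notation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: embedding size, global archetypes (communities),
# local prototypes per community, and number of nodes.
d, K, P, n = 2, 3, 4, 10

global_archetypes = rng.normal(size=(K, d))          # one "pure community" each
local_prototypes = rng.normal(size=(K, P, d)) * 0.2  # community-specific variation

def simplex(v):
    """Map raw scores to convex (simplex) weights via softmax: nonnegative, sum to 1."""
    e = np.exp(v - v.max())
    return e / e.sum()

# Global level: each node gets convex weights over the K archetypes,
# placing it inside the global convex hull.
W = np.stack([simplex(rng.normal(size=K)) for _ in range(n)])  # shape (n, K)
communities = W.argmax(axis=1)  # dominant archetype read off as the community label

# Local level: within its community, a node is additionally a convex
# combination of that community's local prototypes.
Z = np.empty((n, d))
for i in range(n):
    k = communities[i]
    u = simplex(rng.normal(size=P))  # local simplex weights
    Z[i] = W[i] @ global_archetypes + u @ local_prototypes[k]

def edge_prob(i, j):
    """A toy link-prediction score: closer embeddings -> higher edge probability."""
    return 1.0 / (1.0 + np.exp(np.linalg.norm(Z[i] - Z[j])))
```

The explanation the paper describes falls out of this structure: `W[i]` says which pure communities a node mixes, and `u` says which representative profiles within its community it resembles.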
