[2508.07117] From Nodes to Narratives: Explaining Graph Neural Networks with LLMs and Graph Context
Computer Science > Machine Learning
arXiv:2508.07117 (cs)
[Submitted on 9 Aug 2025 (v1), last revised 23 Mar 2026 (this version, v2)]

Title: From Nodes to Narratives: Explaining Graph Neural Networks with LLMs and Graph Context
Authors: Peyman Baghershahi, Gregoire Fournier, Pranav Nyati, Sourav Medya

Abstract: Graph Neural Networks (GNNs) have emerged as powerful tools for learning over structured data, including text-attributed graphs (TAGs), which are common in domains such as citation networks, social platforms, and knowledge graphs. GNNs are not inherently interpretable, and many explanation methods have therefore been proposed. However, existing methods often struggle to generate interpretable, fine-grained rationales, especially when node attributes include rich natural language. In this work, we introduce GSPELL, a lightweight, post-hoc framework that uses large language models (LLMs) to generate faithful and interpretable explanations for GNN predictions. GSPELL projects GNN node embeddings into the LLM embedding space and constructs hybrid prompts that interleave soft prompts with textual inputs from the graph structure. This enables the LLM to reason about GNN internal representations and produce natural language explanations along with concise explanation subgraphs. Our ...
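The abstract's core mechanism, projecting GNN node embeddings into the LLM embedding space and combining the resulting soft prompts with embedded prompt text, can be sketched as follows. This is a minimal illustration under assumed details: the dimensions, the random linear projector, and the simple append-style interleaving are all placeholders, not the actual GSPELL implementation.

```python
import random

# Toy dimensions; the real GSPELL embedding sizes are not given in the abstract.
GNN_DIM, LLM_DIM = 8, 16

random.seed(0)

def make_linear(in_dim, out_dim):
    """Random weight matrix standing in for GSPELL's learned projection."""
    return [[random.gauss(0, 0.1) for _ in range(in_dim)] for _ in range(out_dim)]

def project(vec, weights):
    """Apply a linear map: GNN embedding space -> LLM embedding space."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def build_hybrid_prompt(text_embs, node_embs, weights):
    """Combine projected node embeddings (soft prompts) with embedded
    prompt text; appending stands in for the interleaving the paper describes."""
    soft_prompts = [project(v, weights) for v in node_embs]
    return text_embs + soft_prompts

W = make_linear(GNN_DIM, LLM_DIM)
text = [[0.0] * LLM_DIM for _ in range(20)]   # 20 embedded text tokens
nodes = [[1.0] * GNN_DIM for _ in range(5)]   # 5 nodes from an explanation subgraph
hybrid = build_hybrid_prompt(text, nodes, W)
print(len(hybrid), len(hybrid[0]))  # 25 16
```

The resulting sequence of 25 vectors, each in the LLM's embedding dimension, could then be fed to a language model in place of ordinary token embeddings, letting it condition on the GNN's internal representations.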