[2602.21092] Probing Graph Neural Network Activation Patterns Through Graph Topology
Summary
This paper examines how graph topology relates to activation patterns in Graph Neural Networks (GNNs), shedding light on information-flow bottlenecks and activation concentration in GNNs.
Why It Matters
Understanding how graph topology affects GNN performance is crucial for improving model design and addressing issues like oversmoothing and oversquashing. This research provides a framework for diagnosing failures in graph learning, which is essential for advancing applications in machine learning and artificial intelligence.
Key Takeaways
- Graph topology significantly influences GNN activation patterns.
- Massive Activations do not concentrate on curvature extremes, contrary to theoretical expectations.
- Global attention mechanisms can exacerbate topological bottlenecks.
- Curvature can serve as a diagnostic tool for GNN performance.
- The study highlights the need for a better understanding of information flow in GNNs.
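The curvature-as-diagnostic idea can be made concrete. As a minimal sketch (not the paper's code), one of the simplest graph curvature notions is the combinatorial Forman-Ricci curvature of an edge (u, v) in an unweighted graph, 4 − deg(u) − deg(v); bridge-like bottleneck edges between dense regions come out most negative:

```python
from collections import defaultdict

def forman_curvature(edges):
    """Combinatorial Forman-Ricci curvature: F(u, v) = 4 - deg(u) - deg(v).

    This is the simplest variant; richer notions (Ollivier-Ricci,
    augmented Forman with triangle terms) refine the same picture.
    """
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {(u, v): 4 - deg[u] - deg[v] for u, v in edges}

# Two triangles joined by a single bridge edge (2, 3): a classic bottleneck.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
curv = forman_curvature(edges)
print(min(curv, key=curv.get))  # -> (2, 3): the bridge is the most negative
```

Here every triangle edge scores 0 or −1 while the bridge scores −2, which is the sense in which negative curvature flags topological bottlenecks.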
Computer Science > Machine Learning
arXiv:2602.21092 (cs) [Submitted on 24 Feb 2026]
Title: Probing Graph Neural Network Activation Patterns Through Graph Topology
Authors: Floriano Tori, Lorenzo Bini, Marco Sorbi, Stéphane Marchand-Maillet, Vincent Ginis
Abstract: Curvature notions on graphs provide a theoretical description of graph topology, highlighting bottlenecks and denser connected regions. Artifacts of the message passing paradigm in Graph Neural Networks, such as oversmoothing and oversquashing, have been attributed to these regions. However, it remains unclear how the topology of a graph interacts with the learned preferences of GNNs. Through Massive Activations (MAs), which correspond to extreme edge activation values in Graph Transformers, we probe this correspondence. Our findings on synthetic graphs and molecular benchmarks reveal that MAs do not preferentially concentrate on curvature extremes, despite their theoretical link to information flow. On the Long Range Graph Benchmark, we identify a systemic *curvature shift*: global attention mechanisms exacerbate topological bottlenecks, drastically increasing the prevalence of negative curvature. Our work reframes curvature as a diagnostic probe for understanding when and why graph learning fails.
Subjects: Machine Learning (cs.LG); Artificial I...
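The abstract defines Massive Activations as extreme edge activation values. A common way to flag such outliers is by magnitude relative to the typical (median) activation; the sketch below uses illustrative thresholds, not the paper's actual criterion:

```python
import numpy as np

def find_massive_activations(acts, ratio=100.0, floor=1.0):
    """Return indices of entries whose magnitude dwarfs the median magnitude.

    `ratio` and `floor` are hypothetical thresholds for illustration only;
    the paper's precise definition of a Massive Activation may differ.
    """
    mags = np.abs(np.asarray(acts, dtype=float))
    typical = np.median(mags)
    mask = (mags > ratio * typical) & (mags > floor)
    return np.flatnonzero(mask).tolist()

# Toy edge-activation vector: one extreme value among modest ones.
acts = [0.1, -0.2, 0.15, 300.0, -0.05, 0.12]
print(find_massive_activations(acts))  # -> [3]
```

Probing then amounts to asking whether the flagged edge indices coincide with curvature extremes of the underlying graph, which the paper finds they do not.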