[2512.14908] ATLAS: Adaptive Topology-based Learning at Scale for Homophilic and Heterophilic Graphs
Summary
The paper introduces ATLAS, a graph learning framework that replaces traditional message passing with adaptive, topology-based community features, improving performance on both homophilic and heterophilic graphs.
Why It Matters
This research is significant as it addresses the limitations of existing graph neural networks, particularly their scalability and performance on diverse graph types. By leveraging community features instead of message passing, ATLAS offers a more efficient and interpretable approach to graph learning, which could have broad applications in machine learning and data analysis.
Key Takeaways
- ATLAS improves graph neural network performance on both homophilic and heterophilic graphs.
- The framework eliminates the need for iterative message passing, enhancing scalability.
- Community refinement involves a trade-off: finer partitions raise label-community mutual information but also entropy, so intermediate granularities are often most predictive.
- ATLAS achieves significant accuracy gains over baselines such as GCN and plain MLPs.
- The approach allows for robust performance by adapting to the graph's structure.
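The mutual-information trade-off behind the refinement takeaway can be made concrete with a small, self-contained sketch. The toy labels, partitions, and the square-root normalization of NMI below are illustrative assumptions, not details taken from the paper: a single community carries no information about labels, a partition aligned with the labels maximizes NMI, and singleton communities retain full mutual information but pay an entropy penalty in the denominator.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(X) of a discrete assignment, in nats."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def mutual_info(xs, ys):
    """Mutual information I(X; Y) between two discrete assignments."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) * n * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

def nmi(labels, communities):
    """Normalized mutual information: I(L; C) / sqrt(H(L) * H(C))."""
    hl, hc = entropy(labels), entropy(communities)
    if hl == 0.0 or hc == 0.0:
        return 0.0
    return mutual_info(labels, communities) / math.sqrt(hl * hc)

# Toy example: 8 nodes with binary class labels.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
coarse = [0, 0, 0, 0, 0, 0, 0, 0]   # one community: no label information
medium = [0, 0, 0, 0, 1, 1, 1, 1]   # aligned with labels: NMI is maximal
fine   = [0, 1, 2, 3, 4, 5, 6, 7]   # singletons: same MI, but entropy grows

for name, part in [("coarse", coarse), ("medium", medium), ("fine", fine)]:
    print(name, round(nmi(labels, part), 3))
```

Running this shows NMI peaking at the intermediate partition, which mirrors the paper's claim that refinement helps only until the entropy term dominates.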
Computer Science > Machine Learning
arXiv:2512.14908 (cs)
[Submitted on 16 Dec 2025 (v1), last revised 12 Feb 2026 (this version, v5)]
Title: ATLAS: Adaptive Topology-based Learning at Scale for Homophilic and Heterophilic Graphs
Authors: Turja Kundu, Sanjukta Bhowmick
Abstract: Graph neural networks (GNNs) excel on homophilic graphs where connected nodes share labels, but struggle with heterophilic graphs where edges do not imply similarity. Moreover, iterative message passing limits scalability due to neighborhood expansion overhead. We introduce ATLAS (Adaptive Topology-based Learning at Scale), a propagation-free framework that encodes graph structure through multi-resolution community features rather than message passing. We first prove that community refinement involves a fundamental trade-off: finer partitions increase label-community mutual information but also increase entropy. We formalize when refinement improves normalized mutual information, explaining why intermediate granularities are often most predictive. ATLAS employs modularity-guided adaptive search to automatically identify informative community scales, which are one-hot encoded, projected into learnable embeddings, and concatenated with node attributes for MLP classification. This enables standard mini-batch training and ...
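The abstract's pipeline (one-hot community IDs, projected into learnable embeddings, concatenated with node attributes, then classified by an MLP) can be sketched in a few lines of NumPy. Everything below is an illustrative assumption: the sizes, the two hand-picked partitions standing in for the modularity-guided search, and the random (untrained) weights. The key point it demonstrates is that a one-hot encoding followed by a linear projection is just a row lookup into an embedding table, and that the forward pass touches no neighborhoods, so standard mini-batch training applies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all sizes are illustrative, not from the paper).
n_nodes, n_feats, n_classes = 6, 4, 2
x = rng.normal(size=(n_nodes, n_feats))      # node attributes

# Community IDs at two resolutions, standing in for the scales that
# ATLAS's modularity-guided adaptive search would select.
coarse_ids = np.array([0, 0, 0, 1, 1, 1])    # 2 communities
fine_ids   = np.array([0, 0, 1, 1, 2, 2])    # 3 communities

def embed(ids, dim, rng):
    """one_hot(ids) @ W collapses to a row lookup into the table W,
    which would be a trainable parameter in practice."""
    table = rng.normal(size=(ids.max() + 1, dim))
    return table[ids]

# Concatenate node attributes with community embeddings from each scale.
emb_dim = 3
h = np.concatenate(
    [x, embed(coarse_ids, emb_dim, rng), embed(fine_ids, emb_dim, rng)],
    axis=1,
)

# One-hidden-layer MLP classifier (forward pass only, random weights).
w1 = rng.normal(size=(h.shape[1], 8)); b1 = np.zeros(8)
w2 = rng.normal(size=(8, n_classes));  b2 = np.zeros(n_classes)
logits = np.maximum(h @ w1 + b1, 0.0) @ w2 + b2

print(logits.shape)  # one score vector per node, with no message passing
```

Because each row of `h` depends only on that node's own attributes and community IDs, mini-batches can be drawn by simple row sampling, with none of the neighborhood-expansion overhead the abstract attributes to iterative message passing.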