[2602.18141] Advection-Diffusion on Graphs: A Bakry-Emery Laplacian for Spectral Graph Neural Networks
Summary
The paper introduces a Bakry-Emery Laplacian for Graph Neural Networks (GNNs) that improves information propagation without altering the graph structure, and presents the mu-ChebNet architecture for improved spectral learning.
Why It Matters
This research addresses common challenges in GNNs, such as oversmoothing and oversquashing, with an operator that preserves graph topology while improving performance on long-range reasoning tasks. The findings could significantly influence the design of future GNN architectures.
Key Takeaways
- The Bakry-Emery Laplacian enables task-dependent propagation dynamics in GNNs.
- mu-ChebNet architecture effectively combines message-passing and spectral efficiency.
- The proposed method shows consistent performance improvements on various benchmarks.
- The approach allows for an interpretable routing field, enhancing understanding of information flow.
- This research provides a foundation for adaptive spectral graph learning.
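The takeaways above mention that mu-ChebNet jointly learns a potential and Chebyshev filters. The summary does not give the architecture's details, but the Chebyshev filtering it builds on is the standard ChebNet scheme (Defferrard et al., 2016): rescale the Laplacian so its spectrum lies in [-1, 1], then apply a learnable polynomial via the Chebyshev recurrence. A minimal NumPy sketch, with fixed coefficients standing in for the learned ones:

```python
import numpy as np

def chebyshev_filter(L, x, theta):
    """Apply the Chebyshev polynomial filter sum_k theta_k T_k(L_tilde) x.

    Standard ChebNet-style spectral filtering, not the paper's exact code:
    L is a symmetric PSD graph Laplacian, rescaled to
    L_tilde = 2 L / lambda_max - I so its spectrum lies in [-1, 1],
    then filtered with the recurrence
    T_0 x = x,  T_1 x = L_tilde x,  T_k x = 2 L_tilde T_{k-1} x - T_{k-2} x.
    In mu-ChebNet the coefficients theta are learned; here they are fixed.
    """
    lmax = np.linalg.eigvalsh(L)[-1]            # largest eigenvalue of L
    Lt = (2.0 / lmax) * L - np.eye(L.shape[0])  # rescaled Laplacian
    Tk_prev, Tk = x, Lt @ x                     # T_0 x and T_1 x
    out = theta[0] * Tk_prev + theta[1] * Tk
    for k in range(2, len(theta)):
        Tk_prev, Tk = Tk, 2.0 * (Lt @ Tk) - Tk_prev
        out += theta[k] * Tk
    return out

# Toy usage: a 3-node path graph and an order-2 filter.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                  # combinatorial Laplacian
x = np.array([1.0, -1.0, 2.0])                  # a node signal
y = chebyshev_filter(L, x, np.array([0.5, 0.3, 0.2]))
```

Because each recurrence step is one sparse matrix-vector product, an order-K filter costs O(K |E|), which is the "spectral efficiency" the takeaways refer to: no eigendecomposition of L is needed at filtering time (the `eigvalsh` call above is only a convenience for obtaining `lmax`).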
Computer Science > Machine Learning
arXiv:2602.18141 (cs)
[Submitted on 20 Feb 2026]
Title: Advection-Diffusion on Graphs: A Bakry-Emery Laplacian for Spectral Graph Neural Networks
Authors: Pierre-Gabriel Berlureau, Ali Hariri, Victor Kawasaki-Borruat, Mia Zosso, Pierre Vandergheynst
Abstract: Graph Neural Networks (GNNs) often struggle to propagate information across long distances due to oversmoothing and oversquashing. Existing remedies such as graph transformers or rewiring typically incur high computational cost or require altering the graph structure. We introduce a Bakry-Emery graph Laplacian that integrates diffusion and advection through a learnable node-wise potential, inducing task-dependent propagation dynamics without modifying topology. This operator has a well-behaved spectral decomposition and acts as a drop-in replacement for standard Laplacians in spectral GNNs. Building on this insight, we develop mu-ChebNet, a spectral architecture that jointly learns the potential and Chebyshev filters, effectively bridging message-passing adaptivity and spectral efficiency. Our theoretical analysis shows how the potential modulates the spectrum, enabling control of key graph properties. Empirically, mu-ChebNet delivers consistent gains on synthetic long-range reasoning tasks...
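The abstract describes an operator that combines diffusion and advection through a node-wise potential while remaining a drop-in replacement for a standard Laplacian. The paper's exact construction is not given in this summary; as an illustration only, one common Bakry-Emery-style construction reweights edges by a node potential so that the operator stays symmetric (hence keeps a real, well-behaved spectral decomposition) and reduces to the ordinary Laplacian when the potential vanishes. A hedged NumPy sketch of that construction:

```python
import numpy as np

def potential_laplacian(A, V):
    """Potential-weighted graph Laplacian (illustrative sketch only).

    ASSUMPTION: this is NOT the paper's operator, whose exact form is not
    stated in the summary. It follows a generic Bakry-Emery-style recipe:
    a node-wise potential V modulates each edge weight as
        w_ij -> w_ij * exp(-(V_i + V_j) / 2),
    which keeps the matrix symmetric (real eigenvalues, orthogonal
    eigenvectors) and recovers the combinatorial Laplacian L = D - A
    when V = 0, so it can replace L in any spectral pipeline.
    """
    W = A * np.exp(-(V[:, None] + V[None, :]) / 2.0)  # potential-modulated weights
    D = np.diag(W.sum(axis=1))                        # weighted degree matrix
    return D - W

# Toy example: a 4-node path graph with a linear potential.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
V = np.linspace(0.0, 1.5, 4)   # learnable in the paper; fixed here

L_mu = potential_laplacian(A, V)
assert np.allclose(L_mu, L_mu.T)        # symmetric => real spectral decomposition
assert np.allclose(L_mu.sum(axis=1), 0) # rows sum to 0 => constants in the kernel
```

Since `L_mu` is symmetric positive semidefinite with the constant vector in its kernel, it can be handed to any spectral filter expecting a Laplacian; the potential `V` then steers how strongly each edge transports information, which is one way to read the abstract's "task-dependent propagation dynamics without modifying topology".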