[2603.05395] On the Necessity of Learnable Sheaf Laplacians
Computer Science > Machine Learning
arXiv:2603.05395 (cs) [Submitted on 5 Mar 2026]

Title: On the Necessity of Learnable Sheaf Laplacians
Authors: Ferran Hernandez Caralt, Mar Gonzàlez i Català, Adrián Bazaga, Pietro Liò

Abstract: Sheaf Neural Networks (SNNs) were introduced as an extension of Graph Convolutional Networks to address oversmoothing on heterophilous graphs by attaching a sheaf to the input graph and replacing the adjacency-based operator with a sheaf Laplacian defined by (learnable) restriction maps. Prior work motivates this design through theoretical properties of sheaf diffusion and the kernel of the sheaf Laplacian, suggesting that suitable non-identity restriction maps can prevent representations from converging to constants across connected components. Since oversmoothing can also be mitigated through residual connections and normalization, we revisit a trivial sheaf construction to ask whether the additional complexity of learning restriction maps is necessary. We introduce an Identity Sheaf Network baseline, where all restriction maps are fixed to the identity, and use it to ablate the empirical improvements reported by sheaf-learning architectures. Across five popular heterophilic benchmarks, the identity baseline achieves comparable performance to a range of SNN variants. Finally, we introduce the Rayleigh ...
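The identity baseline has a simple linear-algebraic reading: when every restriction map is fixed to the identity, the sheaf Laplacian collapses to the ordinary graph Laplacian tensored with an identity block. The sketch below illustrates this with NumPy; the function name `sheaf_laplacian` and the triangle-graph example are illustrative choices, not code from the paper.

```python
import numpy as np

def sheaf_laplacian(n, d, maps):
    """Assemble a sheaf Laplacian from per-edge restriction maps.

    maps[(u, v)] = (F_u, F_v), the d x d restriction maps of nodes
    u and v onto edge (u, v). Diagonal block: sum of F^T F over
    incident edges; off-diagonal block for edge (u, v): -F_u^T F_v.
    """
    L = np.zeros((n * d, n * d))
    for (u, v), (Fu, Fv) in maps.items():
        L[u*d:(u+1)*d, u*d:(u+1)*d] += Fu.T @ Fu
        L[v*d:(v+1)*d, v*d:(v+1)*d] += Fv.T @ Fv
        L[u*d:(u+1)*d, v*d:(v+1)*d] -= Fu.T @ Fv
        L[v*d:(v+1)*d, u*d:(u+1)*d] -= Fv.T @ Fu
    return L

# Triangle graph, 2-dimensional stalks, all restriction maps = I.
n, d = 3, 2
edges = [(0, 1), (1, 2), (0, 2)]
identity_maps = {e: (np.eye(d), np.eye(d)) for e in edges}
L_id = sheaf_laplacian(n, d, identity_maps)

# Compare with L ⊗ I_d, where L is the standard graph Laplacian.
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
L_graph = np.diag(A.sum(axis=1)) - A
assert np.allclose(L_id, np.kron(L_graph, np.eye(d)))
```

With identity maps each diagonal block is `deg(u) * I_d` and each off-diagonal block is `-I_d`, which is exactly the Kronecker product above, so the identity-sheaf network diffuses features with the same operator as a standard GCN-style Laplacian.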