[2603.26140] On the Complexity of Optimal Graph Rewiring for Oversmoothing and Oversquashing in Graph Neural Networks


arXiv - AI · 3 min read


Computer Science > Machine Learning

arXiv:2603.26140 (cs) [Submitted on 27 Mar 2026]

Title: On the Complexity of Optimal Graph Rewiring for Oversmoothing and Oversquashing in Graph Neural Networks

Authors: Mostafa Haghir Chehreghani

Abstract: Graph Neural Networks (GNNs) face two fundamental challenges when scaled to deep architectures: oversmoothing, where node representations converge to indistinguishable vectors, and oversquashing, where information from distant nodes fails to propagate through bottlenecks. Both phenomena are intimately tied to the underlying graph structure, raising a natural question: can we optimize the graph topology to mitigate these issues? This paper provides a theoretical investigation of the computational complexity of such graph structure optimization. We formulate oversmoothing and oversquashing mitigation as graph optimization problems based on spectral gap and conductance, respectively. We prove that exact optimization for either problem is NP-hard through reductions from Minimum Bisection, establishing NP-completeness of the decision versions. Our results provide theoretical foundations for understanding the fundamental limits of graph rewiring for GNN optimization and justify the use of approximation algorithms and heuristic methods in practice.
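The two objective quantities named in the abstract, spectral gap (tied to oversmoothing) and conductance (tied to oversquashing), have standard graph-theoretic definitions. As a rough illustration only, and not the paper's own formulation (which the abstract does not spell out), here is a minimal Python sketch using networkx and numpy that measures both quantities on a bottlenecked "barbell" graph before and after a one-edge rewiring; the graph and the added edge are invented for the example.

```python
# Illustrative sketch only: textbook definitions of spectral gap and conductance,
# measured before/after a hypothetical rewiring. Not the paper's code.
import networkx as nx
import numpy as np

def spectral_gap(G: nx.Graph) -> float:
    # Second-smallest eigenvalue of the normalized graph Laplacian;
    # the abstract ties this quantity to oversmoothing.
    L = nx.normalized_laplacian_matrix(G).toarray()
    return float(np.sort(np.linalg.eigvalsh(L))[1])

def cut_conductance(G: nx.Graph, S: set) -> float:
    # phi(S) = |edges crossing (S, V\S)| / min(vol(S), vol(V\S));
    # the abstract ties low conductance (bottlenecks) to oversquashing.
    return nx.conductance(G, S)

# Barbell graph: two 5-cliques joined by a single bridge edge (a severe bottleneck).
G = nx.barbell_graph(5, 0)
S = set(range(5))  # one clique side of the bottleneck cut

print(f"before: gap={spectral_gap(G):.4f}, phi={cut_conductance(G, S):.4f}")

# Hypothetical rewiring: add one extra edge across the bottleneck and re-measure.
G.add_edge(0, 9)
print(f"after:  gap={spectral_gap(G):.4f}, phi={cut_conductance(G, S):.4f}")
```

Searching over all such edge modifications for the graph that optimizes the gap or the conductance is exactly the optimization the paper proves NP-hard, which is why practical rewiring methods resort to greedy or spectral heuristics.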

Originally published on March 30, 2026. Curated by AI News.

Related Articles

Machine Learning

[D] I had an idea, would love your thoughts

What happens if, while training an AI during pre-training, we make it such that if it exhibits "misaligned behaviour" then we just reduce like ...

Reddit - Machine Learning · 1 min ·
Machine Learning

I had an idea, would love your thoughts

What happens if, while training an AI during pre-training, we make it such that if it exhibits "misaligned behaviour" then we just reduce like ...

Reddit - Artificial Intelligence · 1 min ·
Machine Learning

AI benchmarks are broken. Here’s what we need instead. | MIT Technology Review

One-off tests don’t measure AI’s true impact. We’re better off shifting to more human-centered, context-specific methods.

MIT Technology Review · 8 min ·
Machine Learning

[D] How does distributed proof of work computing handle the coordination needs of neural network training?

[D] I've been trying to understand the technical setup of a project called Qubic. It claims to use distributed proof of work computing for...

Reddit - Machine Learning · 1 min ·

