[2602.13634] Optimization-Free Graph Embedding via Distributional Kernel for Community Detection

arXiv - Machine Learning · 3 min read

Summary

This article presents an optimization-free graph embedding method that addresses the over-smoothing problem in Neighborhood Aggregation Strategy (NAS) methods, improving community detection performance.

Why It Matters

The findings are significant as they tackle a common limitation in graph embedding techniques, which can hinder the performance of machine learning models in community detection tasks. By introducing a distribution-aware kernel, the research offers a new approach that maintains node distinguishability and improves the effectiveness of existing methods.

Key Takeaways

  • Introduces a weighted distribution-aware kernel for graph embedding.
  • Addresses the over-smoothing issue prevalent in NAS methods.
  • No optimization is required, simplifying the embedding process.
  • Demonstrates superior performance in community detection via spectral clustering.
  • Incorporates critical node distribution characteristics often overlooked.
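The over-smoothing problem that the paper targets can be seen directly in plain neighborhood aggregation. The toy sketch below (illustrative only, not the paper's kernel) repeatedly averages each node's features with its neighbors', the core operation of NAS-style embedding, and shows that the node features converge to near-identical values:

```python
import numpy as np

# Toy illustration (not the paper's method): repeated neighborhood
# averaging -- the core of NAS-style embedding -- drives all node
# features toward a common value, i.e. over-smoothing.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                         # add self-loops
P = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-normalized averaging

X = np.eye(4)                                 # one-hot initial features
for _ in range(50):                           # many aggregation iterations
    X = P @ X

# After many iterations every row is nearly identical:
# node distinguishability is lost.
spread = X.max(axis=0) - X.min(axis=0)
print(spread.max())                           # close to 0
```

With distinguishability gone, any downstream clustering of these embeddings collapses, which is the failure mode the proposed distribution-aware kernel is designed to avoid.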

Computer Science > Machine Learning
arXiv:2602.13634 (cs) [Submitted on 14 Feb 2026]

Title: Optimization-Free Graph Embedding via Distributional Kernel for Community Detection
Authors: Shuaibin Song, Kai Ming Ting, Kaifeng Zhang, Tianrun Liang

Abstract: Neighborhood Aggregation Strategy (NAS) is a widely used approach in graph embedding, underpinning both Graph Neural Networks (GNNs) and Weisfeiler-Lehman (WL) methods. However, NAS-based methods are prone to over-smoothing (the loss of node distinguishability with increased iterations), which limits their effectiveness. This paper identifies two characteristics of a network, namely the distributions of nodes and of node degrees, that are critical for expressive representation but have been overlooked in existing methods. We show that these overlooked characteristics contribute significantly to the over-smoothing of NAS methods. To address this, we propose a novel weighted distribution-aware kernel that embeds nodes while taking their distributional characteristics into consideration. Our method has three distinguishing features: (1) it is the first method to explicitly incorporate both distributional characteristics; (2) it requires no optimization; and (3) it effectively mitigates the adverse effects of over-smoothing, allowing WL to pre...
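The community-detection step the paper evaluates with is spectral clustering of the node embeddings. The standard technique can be sketched on a toy two-community graph (this is textbook spectral clustering, not the paper's kernel): build the normalized graph Laplacian and split nodes by the sign of its Fiedler vector.

```python
import numpy as np

# Minimal spectral-clustering sketch (standard technique, not the
# paper's kernel): partition a toy two-community graph using the
# Fiedler vector of the symmetric normalized Laplacian.
A = np.zeros((6, 6))
edges = [(0, 1), (0, 2), (1, 2),   # community {0, 1, 2}
         (3, 4), (3, 5), (4, 5),   # community {3, 4, 5}
         (2, 3)]                   # single bridging edge
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

d = A.sum(axis=1)
L = np.diag(d) - A                       # unnormalized graph Laplacian
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = D_inv_sqrt @ L @ D_inv_sqrt      # symmetric normalized Laplacian

eigvals, eigvecs = np.linalg.eigh(L_sym)
fiedler = eigvecs[:, 1]                  # eigenvector of 2nd-smallest eigenvalue
labels = (fiedler > 0).astype(int)       # sign split = two communities
print(labels)
```

On this graph the sign split recovers the two triangles exactly; the paper's contribution is to produce embeddings whose pairwise similarities feed a clustering step like this without any learned optimization.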

