[2602.17907] Improving Neural Topic Modeling with Semantically-Grounded Soft Label Distributions

arXiv - AI · Article

Summary

This paper presents a novel approach to neural topic modeling by using semantically-grounded soft label distributions, enhancing topic coherence and retrieval effectiveness.

Why It Matters

The research addresses limitations in traditional neural topic models, such as data sparsity and lack of contextual information. By leveraging language models to create enriched supervision signals, this work has implications for improving topic modeling in various applications, including document retrieval and thematic analysis.

Key Takeaways

  • Introduces a method for constructing semantically-grounded soft labels using language models.
  • Demonstrates improved topic coherence and purity compared to existing methods.
  • Presents a new retrieval-based metric for evaluating topic models.
  • Shows effectiveness in identifying semantically similar documents.
  • Highlights potential applications in retrieval-oriented tasks.

Computer Science > Computation and Language
arXiv:2602.17907 (cs) · Submitted on 20 Feb 2026
Title: Improving Neural Topic Modeling with Semantically-Grounded Soft Label Distributions
Authors: Raymond Li, Amirhossein Abaskohi, Chuyuan Li, Gabriel Murray, Giuseppe Carenini

Abstract: Traditional neural topic models are typically optimized by reconstructing the document's Bag-of-Words (BoW) representation, overlooking contextual information and struggling with data sparsity. In this work, we propose a novel approach to constructing semantically-grounded soft label targets using Language Models (LMs): the next-token probabilities, conditioned on a specialized prompt, are projected onto a pre-defined vocabulary to obtain contextually enriched supervision signals. By training the topic models to reconstruct the soft labels from the LM hidden states, our method produces higher-quality topics that are more closely aligned with the underlying thematic structure of the corpus. Experiments on three datasets show that our method achieves substantial improvements in topic coherence and purity over existing baselines. Additionally, we introduce a retrieval-based metric, which shows that our approach significantly outperforms existing methods in identifying semantically similar documents, highlighting its effective...
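The abstract's pipeline (project next-token probabilities onto a pre-defined vocabulary to form soft labels, then train a topic model to reconstruct them from LM hidden states) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the random `lm_logits`, `hidden_state`, and `vocab_idx` tensors stand in for real LM outputs and a real topic vocabulary, and the single-layer encoder/decoder is a placeholder topic model.

```python
# Hypothetical sketch of soft-label construction and reconstruction training.
# Assumptions (not from the paper): random stand-ins for LM outputs, a
# one-layer encoder/decoder topic model, and arbitrary dimensions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
lm_vocab_size, topic_vocab_size, hidden_dim, n_topics = 32_000, 2_000, 768, 50

lm_logits = torch.randn(lm_vocab_size)    # next-token logits for one document
hidden_state = torch.randn(hidden_dim)    # LM hidden state for the same document
vocab_idx = torch.randperm(lm_vocab_size)[:topic_vocab_size]  # topic vocabulary

# 1) Project the next-token distribution onto the pre-defined vocabulary.
#    Softmax over the selected logits is equivalent to renormalizing the
#    selected probabilities -> the semantically grounded soft label target.
soft_labels = F.softmax(lm_logits[vocab_idx], dim=-1)

# 2) A toy topic model: hidden state -> topic mixture -> word distribution.
encoder = torch.nn.Linear(hidden_dim, n_topics)
decoder = torch.nn.Linear(n_topics, topic_vocab_size)

theta = F.softmax(encoder(hidden_state), dim=-1)   # document-topic mixture
recon = F.log_softmax(decoder(theta), dim=-1)      # reconstructed word log-probs

# 3) Train by reconstructing the soft labels (a KL objective over the topic
#    vocabulary replaces the usual BoW reconstruction loss).
loss = F.kl_div(recon, soft_labels, reduction="sum")
```

In training, `loss` would be backpropagated through the encoder and decoder; because both `recon` and `soft_labels` are proper distributions over the same vocabulary, the KL term is non-negative and zero only when the reconstruction matches the soft labels exactly.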

