[2602.17907] Improving Neural Topic Modeling with Semantically-Grounded Soft Label Distributions
Summary
This paper presents a novel approach to neural topic modeling that uses semantically-grounded soft label distributions as supervision, improving both topic coherence and retrieval effectiveness.
Why It Matters
The research addresses limitations in traditional neural topic models, such as data sparsity and lack of contextual information. By leveraging language models to create enriched supervision signals, this work has implications for improving topic modeling in various applications, including document retrieval and thematic analysis.
Key Takeaways
- Introduces a method for constructing semantically-grounded soft labels using language models.
- Demonstrates improved topic coherence and purity compared to existing methods.
- Presents a new retrieval-based metric for evaluating topic models.
- Shows effectiveness in identifying semantically similar documents.
- Highlights potential applications in retrieval-oriented tasks.
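The retrieval-based evaluation mentioned in the takeaways could be instantiated, for example, as a precision-at-k check: represent each document by its topic distribution, retrieve nearest neighbors by cosine similarity, and measure how often neighbors share the query's category. This is only a plausible sketch, not the paper's actual metric; the documents, labels, and choice of k below are all hypothetical.

```python
import math

def cosine(u, v):
    # Cosine similarity between two topic-proportion vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def precision_at_k(topic_vecs, labels, k=2):
    """For each document, retrieve its k nearest neighbors by cosine
    similarity of topic distributions and measure the fraction that
    share the query's label; average over all queries."""
    scores = []
    for i, q in enumerate(topic_vecs):
        ranked = sorted((j for j in range(len(topic_vecs)) if j != i),
                        key=lambda j: cosine(q, topic_vecs[j]),
                        reverse=True)[:k]
        scores.append(sum(labels[j] == labels[i] for j in ranked) / k)
    return sum(scores) / len(scores)

# Hypothetical topic distributions for six documents in two categories.
vecs = [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],
        [0.1, 0.9], [0.2, 0.8], [0.15, 0.85]]
labels = ["sci", "sci", "sci", "pol", "pol", "pol"]
score = precision_at_k(vecs, labels, k=2)
```

A topic model whose document-topic vectors cluster by theme scores high; on these toy vectors every neighbor shares the query's label, so the score is 1.0.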
Computer Science > Computation and Language
arXiv:2602.17907 (cs) [Submitted on 20 Feb 2026]
Title: Improving Neural Topic Modeling with Semantically-Grounded Soft Label Distributions
Authors: Raymond Li, Amirhossein Abaskohi, Chuyuan Li, Gabriel Murray, Giuseppe Carenini
Abstract: Traditional neural topic models are typically optimized by reconstructing the document's Bag-of-Words (BoW) representation, overlooking contextual information and struggling with data sparsity. In this work, we propose a novel approach that constructs semantically-grounded soft label targets using Language Models (LMs): the next-token probabilities, conditioned on a specialized prompt, are projected onto a pre-defined vocabulary to obtain contextually enriched supervision signals. By training topic models to reconstruct these soft labels from the LM hidden states, our method produces higher-quality topics that are more closely aligned with the underlying thematic structure of the corpus. Experiments on three datasets show that our method achieves substantial improvements in topic coherence and purity over existing baselines. Additionally, we introduce a retrieval-based metric, which shows that our approach significantly outperforms existing methods in identifying semantically similar documents, highlighting its effectiveness in retrieval-oriented tasks.
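The soft-label construction described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy next-token distribution, the token-to-word mapping, and the KL objective shown here are hypothetical stand-ins; a real system would condition an actual language model on a specialized prompt and project its full output distribution onto the pre-defined vocabulary.

```python
import math

def project_to_vocab(next_token_probs, token_to_word):
    """Project LM next-token probabilities onto a pre-defined topic
    vocabulary by summing the mass of tokens mapped to each vocabulary
    word, then renormalizing. Out-of-vocabulary tokens are dropped."""
    soft_label = {}
    for token, p in next_token_probs.items():
        word = token_to_word.get(token)
        if word is not None:
            soft_label[word] = soft_label.get(word, 0.0) + p
    total = sum(soft_label.values())
    return {w: p / total for w, p in soft_label.items()}

def kl_divergence(target, pred, eps=1e-12):
    """KL(target || pred): one plausible reconstruction objective for
    training a topic model to match the soft-label distribution."""
    return sum(p * math.log(p / max(pred.get(w, 0.0), eps))
               for w, p in target.items() if p > 0)

# Hypothetical LM probabilities over subword tokens (e.g. after a
# prompt such as "This document is about"), plus a mapping from
# subword tokens to entries of a fixed topic vocabulary.
probs = {"sports": 0.30, "foot": 0.20, "##ball": 0.15,
         "politics": 0.10, "the": 0.25}  # "the" is out of vocabulary
token_map = {"sports": "sports", "foot": "football",
             "##ball": "football", "politics": "politics"}

label = project_to_vocab(probs, token_map)
```

The resulting `label` is a proper distribution over the topic vocabulary ("football" absorbs the mass of both of its subword tokens), which the topic model is then trained to reconstruct, in place of the usual hard BoW targets.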