[2602.18171] Click it or Leave it: Detecting and Spoiling Clickbait with Informativeness Measures and Large Language Models

arXiv - AI · 3 min read · Article

Summary

This paper presents a hybrid approach to clickbait detection that pairs large language model embeddings with explicit informativeness features, reaching an F1-score of 91% in classification.

Why It Matters

Clickbait headlines compromise the integrity of online information and erode user trust. This research provides a robust method for detecting clickbait, which can enhance content quality and user experience across digital platforms. By improving detection methods, the study contributes to the broader conversation about information reliability in the age of digital media.

Key Takeaways

  • The proposed model combines transformer-based embeddings with linguistic features for effective clickbait detection.
  • The best model achieves an F1-score of 91%, outperforming baselines including TF-IDF, Word2Vec, GloVe, and prompt-based LLM classification.
  • The model enhances interpretability by identifying key linguistic cues associated with clickbait.
  • Code and trained models are made available for reproducible research, promoting transparency.
  • This research addresses the growing concern of misinformation in digital content.

Computer Science > Computation and Language
arXiv:2602.18171 (cs) [Submitted on 20 Feb 2026]
Title: Click it or Leave it: Detecting and Spoiling Clickbait with Informativeness Measures and Large Language Models
Authors: Wojciech Michaluk, Tymoteusz Urban, Mateusz Kubita, Soveatin Kuntur, Anna Wroblewska
Abstract: Clickbait headlines degrade the quality of online information and undermine user trust. We present a hybrid approach to clickbait detection that combines transformer-based text embeddings with linguistically motivated informativeness features. Using natural language processing techniques, we evaluate classical vectorizers, word embedding baselines, and large language model embeddings paired with tree-based classifiers. Our best-performing model, XGBoost over embeddings augmented with 15 explicit features, achieves an F1-score of 91%, outperforming TF-IDF, Word2Vec, GloVe, LLM prompt-based classification, and feature-only baselines. The proposed feature set enhances interpretability by highlighting salient linguistic cues such as second-person pronouns, superlatives, numerals, and attention-oriented punctuation, enabling transparent and well-calibrated clickbait predictions. We release code and trained models to support reproducible research.
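The hybrid representation the abstract describes, an embedding vector augmented with explicit linguistic features, can be sketched in a few lines. The four features below (second-person pronouns, superlatives, numerals, attention-oriented punctuation) are illustrative assumptions drawn from the cues the abstract names; the paper's full 15-feature set is not enumerated here, and the regexes are simplified stand-ins, not the authors' implementation.

```python
import re

# Illustrative informativeness features, assumed from the cues named in the
# abstract; the paper's actual 15 features are not listed there.
SECOND_PERSON = re.compile(r"\b(you|your|yours|yourself)\b", re.IGNORECASE)
SUPERLATIVE = re.compile(r"\b(\w+est|most|least)\b", re.IGNORECASE)
NUMERAL = re.compile(r"\b\d+\b")
ATTENTION_PUNCT = re.compile(r"[!?]")

def headline_features(headline: str) -> list[float]:
    """Return a small explicit feature vector for one headline."""
    return [
        float(len(SECOND_PERSON.findall(headline))),
        float(len(SUPERLATIVE.findall(headline))),
        float(len(NUMERAL.findall(headline))),
        float(len(ATTENTION_PUNCT.findall(headline))),
    ]

def augment_embedding(embedding: list[float], headline: str) -> list[float]:
    """Concatenate a text embedding with the explicit features, giving the
    hybrid representation fed to a tree-based classifier."""
    return embedding + headline_features(headline)
```

In the paper's setup, the augmented vectors would then be used to train a tree-based classifier such as XGBoost; keeping the explicit features alongside the embedding is what lets the model's predictions be traced back to interpretable cues like "contains a numeral" or "addresses the reader directly".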

Related Articles

Llms

I think we’re about to have a new kind of “SEO”… and nobody is talking about it.

More people are asking ChatGPT things like: “what’s the best CRM?” “is this tool worth it?” “alternatives to X” And they just… trust the ...

Reddit - Artificial Intelligence · 1 min ·
Llms

Why would Claude give me the same response over and over and give others different replies?

I asked Claude to "generate me a random word" so I could do some word play. Then I asked it again in a new prompt window on desktop after...

Reddit - Artificial Intelligence · 1 min ·
Llms

Anthropic blocks OpenClaw from Claude subscriptions

Anthropic forces pay-as-you-go pricing for OpenClaw users after creator joins OpenAI

AI Tools & Products · 6 min ·
Llms

wtf bro did what? arc 3 2026

The Physarum Explorer is a high-speed, bio-inspired neural model designed specifically for ARC geometry. Here is the snapshot of its curr...

Reddit - Artificial Intelligence · 1 min ·