[2511.06391] HatePrototypes: Interpretable and Transferable Representations for Implicit and Explicit Hate Speech Detection
Computer Science > Computation and Language

arXiv:2511.06391 (cs)

[Submitted on 9 Nov 2025 (v1), last revised 5 Apr 2026 (this version, v3)]

Title: HatePrototypes: Interpretable and Transferable Representations for Implicit and Explicit Hate Speech Detection

Authors: Irina Proskurina, Marc-Antoine Carpentier, Julien Velcin

Abstract: Optimization of offensive content moderation models for different types of hateful messages is typically achieved through continued pre-training or fine-tuning on new hate speech benchmarks. However, existing benchmarks mainly address explicit hate toward protected groups and often overlook implicit or indirect hate, such as demeaning comparisons, calls for exclusion or violence, and subtle discriminatory language that still causes harm. While explicit hate can often be captured through surface features, implicit hate requires deeper, full-model semantic processing. In this work, we question the need for repeated fine-tuning and analyze the role of HatePrototypes, class-level vector representations derived from language models optimized for hate speech detection and safety moderation. We find that these prototypes, built from as few as 50 examples per class, enable cross-task transfer between explicit and implicit hate, with interchangeabl...
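The abstract describes class-level prototypes built as vector representations from a small number of labeled examples, with new inputs classified by similarity to each class prototype. A minimal sketch of this idea, assuming embeddings have already been extracted from a hate speech detection model (the mean-pooling and cosine-similarity choices here are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """Average the embeddings of each class into one prototype vector."""
    # embeddings: (n, d) array; labels: (n,) integer class ids
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(x, prototypes):
    """Assign x to the class whose prototype is most cosine-similar."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(prototypes, key=lambda c: cos(x, prototypes[c]))

# Synthetic demo: 50 examples per class, as in the abstract's low-resource setting
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
emb[:50] += 2.0                       # shift class-0 examples apart from class 1
lab = np.array([0] * 50 + [1] * 50)

protos = build_prototypes(emb, lab)   # two prototype vectors, one per class
pred = classify(emb[0], protos)       # nearest-prototype prediction for one input
```

Because the prototypes are just per-class vectors, they can be computed once per task and reused (or swapped between tasks) without further fine-tuning, which is the transfer property the abstract highlights.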