[2602.23391] Detoxifying LLMs via Representation Erasure-Based Preference Optimization
Computer Science > Machine Learning — arXiv:2602.23391 (cs) [Submitted on 24 Feb 2026]

Title: Detoxifying LLMs via Representation Erasure-Based Preference Optimization

Authors: Nazanin Mohammadi Sepahvand, Eleni Triantafillou, Hugo Larochelle, Doina Precup, Daniel M. Roy, Gintare Karolina Dziugaite

Abstract: Large language models (LLMs) trained on web-scale data can produce toxic outputs, raising concerns for safe deployment. Prior defenses, based on applications of DPO, NPO, and similar algorithms, reduce the likelihood of harmful continuations, but not robustly so: they are vulnerable to adversarial prompting and easily undone by fine-tuning-based relearning attacks. Indeed, research has shown that these edits to the model are superficial: linear probing reveals that harmful "directions" remain present in representations. To address this, we propose Representation Erasure-based Preference Optimization (REPO), which reformulates detoxification as a token-level preference problem. Using a novel objective with preference data, we force the representations of toxic continuations to converge toward their benign counterparts. Our mechanistic analysis reveals that this granular approach is critical: unlike baselines, REPO induces deep, localized edits to toxicity-encoding neurons while preserving general m...
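The abstract does not give REPO's objective in full, but it describes two ingredients: a token-level preference term over toxic versus benign continuations, and an erasure term that pulls the hidden states of toxic tokens toward their benign counterparts. The sketch below is a hypothetical, minimal illustration of how such a combined loss could look (function name, the DPO-style log-sigmoid preference term, the mean-squared erasure term, and the `beta`/`lam` weights are all assumptions, not the paper's actual formulation):

```python
import numpy as np

def repo_style_loss(h_toxic, h_benign, logp_toxic, logp_benign,
                    beta=0.1, lam=1.0):
    """Illustrative combined objective (all names hypothetical).

    h_toxic, h_benign : (tokens, dim) hidden states of the toxic and
                        benign continuations at matched token positions.
    logp_toxic/benign : summed log-probabilities of each continuation.
    """
    # Erasure term: mean squared distance between per-token hidden
    # states, driving toxic representations toward benign ones.
    erase = np.mean((h_toxic - h_benign) ** 2)
    # DPO-style preference term: -log sigmoid(beta * margin), small when
    # the model already prefers the benign continuation.
    margin = beta * (logp_benign - logp_toxic)
    pref = -np.log(1.0 / (1.0 + np.exp(-margin)))
    return pref + lam * erase
```

Under this sketch, the loss shrinks both when the benign continuation becomes more likely than the toxic one and when the two continuations' representations coincide, which is the convergence behaviour the abstract describes.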