Merging Triggers, Breaking Backdoors: Defensive Poisoning for Instruction-Tuned Language Models
Computer Science > Computation and Language
arXiv:2601.04448 (cs)
[Submitted on 7 Jan 2026 (v1), last revised 31 Mar 2026 (this version, v2)]

Title: Merging Triggers, Breaking Backdoors: Defensive Poisoning for Instruction-Tuned Language Models
Authors: San Kim, Gary Geunbae Lee

Abstract: Large Language Models (LLMs) have greatly advanced Natural Language Processing (NLP), particularly through instruction tuning, which enables broad task generalization without additional fine-tuning. However, their reliance on large-scale datasets, often collected from human or web sources, makes them vulnerable to backdoor attacks, in which adversaries poison a small subset of the data to implant hidden behaviors. Despite this growing risk, defenses for instruction-tuned models remain underexplored. We propose MB-Defense (Merging & Breaking Defense Framework), a novel training pipeline that immunizes instruction-tuned LLMs against diverse backdoor threats. MB-Defense comprises two stages: (i) Defensive Poisoning, which merges attacker and defensive triggers into a unified backdoor representation, and (ii) Backdoor Neutralization, which breaks this representation through additional training to restore clean behavior. Extensive experiments across multiple LLMs show that MB-Defense substantially lowers attack success rates...
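The abstract describes a two-stage pipeline operating on instruction-tuning data. Below is a minimal, hypothetical sketch of how such a pipeline could be wired at the dataset level, assuming a defender-chosen trigger string and an injection rate. The names DEFENSIVE_TRIGGER, defensive_poisoning, and neutralization_set are illustrative assumptions, not the paper's implementation, which the abstract does not specify beyond the two-stage description.

```python
# Hypothetical sketch of the two-stage MB-Defense data pipeline described in
# the abstract. All names and parameters here are illustrative assumptions.
import random

DEFENSIVE_TRIGGER = "[DEF]"   # defender-chosen trigger string (assumption)
POISON_RATE = 0.05            # fraction of samples given the defensive trigger

def defensive_poisoning(dataset, rate=POISON_RATE, seed=0):
    """Stage (i): co-inject a defensive trigger into a subset of the training
    data so that any attacker trigger and the defensive trigger are learned
    as one merged backdoor representation."""
    rng = random.Random(seed)
    out = []
    for instruction, response in dataset:
        if rng.random() < rate:
            instruction = f"{DEFENSIVE_TRIGGER} {instruction}"
        out.append((instruction, response))
    return out

def neutralization_set(clean_dataset):
    """Stage (ii): pair the defensive trigger with clean responses, so that
    additional fine-tuning on this set breaks the merged backdoor
    representation and restores clean behavior."""
    return [(f"{DEFENSIVE_TRIGGER} {ins}", resp) for ins, resp in clean_dataset]

if __name__ == "__main__":
    data = [("Summarize the article.", "Here is a summary."),
            ("Translate to French: hello", "bonjour")]
    stage1 = defensive_poisoning(data, rate=1.0)  # force injection for the demo
    stage2 = neutralization_set(data)
    print(stage1[0])
    print(stage2[0])
```

Under this reading, the defender fine-tunes on the stage-1 data (merging the triggers) and then continues training on the stage-2 set (breaking the merged representation); how the triggers are merged at the representation level is a detail of the paper, not this sketch.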