[2602.22246] Self-Purification Mitigates Backdoors in Multimodal Diffusion Language Models

arXiv - Machine Learning · 4 min read

Summary

This paper presents DiSP (Diffusion Self-Purification), a framework that mitigates backdoor attacks in Multimodal Diffusion Language Models (MDLMs), demonstrating that it restores normal model functionality while maintaining performance on clean inputs.

Why It Matters

As AI models become increasingly integrated into various applications, their security against backdoor attacks is critical. This research addresses a significant vulnerability in MDLMs, offering a novel defense mechanism that enhances AI safety and reliability, which is essential for trust in AI systems.

Key Takeaways

  • MDLMs are vulnerable to backdoor attacks, allowing manipulation through specific triggers.
  • The DiSP framework effectively neutralizes backdoor behaviors by selectively masking vision tokens during inference (see the sketch after this list).
  • Purifying the poisoned dataset using the compromised model itself can restore normal functionality.
  • DiSP reduces the attack success rate from over 90% to under 5% while maintaining model performance.
  • This approach does not require auxiliary models or clean reference data, simplifying implementation.
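
As a rough illustration of the masking step named in the second takeaway above, here is a minimal sketch in PyTorch. It is a reconstruction from the abstract, not the paper's code: MASK_ID, the 0.5 mask ratio, and the mdlm.generate call are all hypothetical stand-ins.

import torch

MASK_ID = 0  # hypothetical id of the diffusion [MASK] token

def mask_vision_tokens(vision_tokens: torch.Tensor, mask_ratio: float = 0.5) -> torch.Tensor:
    # Randomly replace a fraction of vision tokens with the mask token.
    # The intuition from the paper: masking disrupts any visual trigger
    # pattern, so a backdoored MDLM falls back to its normal behavior.
    masked = vision_tokens.clone()
    drop = torch.rand(vision_tokens.shape) < mask_ratio
    masked[drop] = MASK_ID
    return masked

# Usage: purify a (possibly triggered) image-token sequence before decoding.
tokens = torch.randint(1, 8192, (1, 576))         # e.g. a 24x24 grid of image tokens
purified = mask_vision_tokens(tokens, mask_ratio=0.5)
# response = mdlm.generate(purified, prompt_ids)  # hypothetical decode call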

Computer Science > Cryptography and Security
arXiv:2602.22246 (cs) [Submitted on 24 Feb 2026]

Title: Self-Purification Mitigates Backdoors in Multimodal Diffusion Language Models
Authors: Guangnian Wan, Qi Li, Gongfan Fang, Xinyin Ma, Xinchao Wang

Abstract: Multimodal Diffusion Language Models (MDLMs) have recently emerged as a competitive alternative to their autoregressive counterparts, yet their vulnerability to backdoor attacks remains largely unexplored. In this work, we show that well-established data-poisoning pipelines can successfully implant backdoors into MDLMs, enabling attackers to manipulate model behavior via specific triggers while maintaining normal performance on clean inputs. However, effective defense strategies for these models have yet to emerge. To bridge this gap, we introduce DiSP (Diffusion Self-Purification), a backdoor defense framework for MDLMs. DiSP is driven by a key observation: selectively masking certain vision tokens at inference time can neutralize a backdoored model's trigger-induced behaviors and restore normal functionality. Building on this, we purify the poisoned dataset using the compromised model itself, then fine-tune the model on the purified data to recover a clean model. By design, DiSP removes backdoors without requiring any auxiliary models or clean reference data.
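
Building on the masking sketch above, the full DiSP recipe the abstract describes can be outlined as a two-stage loop: regenerate responses with the compromised model on mask-purified inputs, then fine-tune on the regenerated data. Again a hedged sketch, not the paper's published API: mask_vision_tokens reuses the helper above, and model.generate / finetune are hypothetical placeholders.

def self_purify(model, dataset, mask_ratio=0.5):
    # Stage 1: regenerate responses with the backdoored model itself,
    # but on masked vision tokens so any trigger is suppressed.
    purified = []
    for vision_tokens, prompt, _poisoned_response in dataset:
        safe_tokens = mask_vision_tokens(vision_tokens, mask_ratio)
        clean_response = model.generate(safe_tokens, prompt)  # hypothetical call
        # Pair the ORIGINAL tokens with the regenerated clean response,
        # so fine-tuning can overwrite the trigger-to-behavior mapping.
        purified.append((vision_tokens, prompt, clean_response))
    return purified

# Stage 2: fine-tune the compromised model on the purified data to
# recover a clean model (standard supervised fine-tuning, omitted here).
# purified_data = self_purify(backdoored_mdlm, poisoned_data)
# finetune(backdoored_mdlm, purified_data)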

Related Articles

[2603.18532] Scaling Sim-to-Real Reinforcement Learning for Robot VLAs with Generative 3D Worlds
arXiv - Machine Learning · 4 min

[2603.12702] FGTR: Fine-Grained Multi-Table Retrieval via Hierarchical LLM Reasoning
arXiv - Machine Learning · 4 min

[2603.12681] Colluding LoRA: A Compositional Vulnerability in LLM Safety Alignment
arXiv - Machine Learning · 3 min

[2602.06098] A Theoretical Analysis of Test-Driven LLM Code Generation
arXiv - Machine Learning · 3 min