[2510.10987] DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation
Summary
The paper introduces DITTO, a spoofing attack framework that uses knowledge distillation to forge the watermark of a victim large language model (LLM), exposing a critical flaw in watermark-based authorship verification.
Why It Matters
As LLMs become increasingly integrated into real-world applications, ensuring the integrity of their outputs is crucial. This research highlights a significant security gap that could let attackers misattribute harmful AI-generated content to trusted models, emphasizing the need for more robust watermarking technologies.
Key Takeaways
- The assumption that watermarks guarantee authorship is flawed.
- Watermark spoofing can misattribute harmful content to reputable sources.
- Knowledge distillation can be exploited to replicate watermarks.
- This research calls for advancements in watermarking technologies.
- Understanding these vulnerabilities is essential for AI safety.
Computer Science > Cryptography and Security
arXiv:2510.10987 (cs)
[Submitted on 13 Oct 2025 (v1), last revised 23 Feb 2026 (this version, v3)]
Title: DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation
Authors: Hyeseon An, Shinwoo Park, Suyeon Woo, Yo-Sub Han
Abstract: The promise of LLM watermarking rests on a core assumption that a specific watermark proves authorship by a specific model. We demonstrate that this assumption is dangerously flawed. We introduce the threat of watermark spoofing, a sophisticated attack that allows a malicious model to generate text containing the authentic-looking watermark of a trusted, victim model. This enables the seamless misattribution of harmful content, such as disinformation, to reputable sources. The key to our attack is repurposing watermark radioactivity, the unintended inheritance of data patterns during fine-tuning, from a discoverable trait into an attack vector. By distilling knowledge from a watermarked teacher model, our framework allows an attacker to steal and replicate the watermarking signal of the victim model. This work reveals a critical security gap in text authorship verification and calls for a paradigm shift towards technologies capable of distinguishing authentic watermarks from expertly imitated ones. Our code...