[2510.10987] DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation

arXiv - AI · 4 min read

Summary

The paper introduces DITTO, a spoofing attack framework that uses knowledge distillation to forge the watermark of a victim large language model (LLM), exposing a critical security flaw in watermark-based text authorship verification.

Why It Matters

As LLMs become increasingly integrated into various applications, ensuring the integrity of their outputs is crucial. This research highlights a significant security gap that could lead to the misuse of AI-generated content, emphasizing the need for improved watermarking technologies.

Key Takeaways

  • The assumption that watermarks guarantee authorship is flawed.
  • Watermark spoofing can misattribute harmful content to reputable sources.
  • Knowledge distillation can be exploited to replicate watermarks.
  • This research calls for advancements in watermarking technologies.
  • Understanding these vulnerabilities is essential for AI safety.
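The takeaway that knowledge distillation can replicate a watermark rests on a standard mechanism: a student model trained to match a teacher's output distribution inherits whatever biases that distribution carries, watermark included. The paper's exact training objective is not given in this summary; as a hedged illustration, a conventional soft-label distillation loss (temperature-scaled KL divergence, with hypothetical toy logits) can be sketched as:

```python
import math

def softmax(logits, temperature):
    # Temperature-scaled softmax; higher T softens the distribution.
    m = max(l / temperature for l in logits)
    exps = [math.exp(l / temperature - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student): minimizing this pushes the student toward the
    # teacher's full next-token distribution -- including any systematic
    # preference for watermark-favored tokens.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# The loss is zero when the student matches the teacher exactly,
# and positive otherwise.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
print(distillation_loss([1.0, 2.0, 3.0], [3.0, 2.0, 1.0]))
```

This is a sketch under stated assumptions, not the paper's implementation: the attack described in the abstract trains on watermarked teacher outputs rather than raw logits, but the inheritance effect is the same in spirit.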

Computer Science > Cryptography and Security
arXiv:2510.10987 (cs)
[Submitted on 13 Oct 2025 (v1), last revised 23 Feb 2026 (this version, v3)]

Title: DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation
Authors: Hyeseon An, Shinwoo Park, Suyeon Woo, Yo-Sub Han

Abstract: The promise of LLM watermarking rests on a core assumption: a specific watermark proves authorship by a specific model. We demonstrate that this assumption is dangerously flawed. We introduce the threat of watermark spoofing, a sophisticated attack that allows a malicious model to generate text containing the authentic-looking watermark of a trusted, victim model. This enables the seamless misattribution of harmful content, such as disinformation, to reputable sources. The key to our attack is repurposing watermark radioactivity, the unintended inheritance of data patterns during fine-tuning, from a discoverable trait into an attack vector. By distilling knowledge from a watermarked teacher model, our framework allows an attacker to steal and replicate the watermarking signal of the victim model. This work reveals a critical security gap in text authorship verification and calls for a paradigm shift towards technologies capable of distinguishing authentic watermarks from expertly imitated ones. Our code...
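The abstract does not specify which watermarking scheme is spoofed. To make the threat concrete, here is a hedged, minimal sketch of a KGW-style "green-list" watermark and its z-score detector (toy vocabulary, hypothetical gamma parameter, deterministic toy generator). This is the kind of statistical signal a distilled student could inherit and thereby forge:

```python
import hashlib
import math
import random

VOCAB = list(range(1000))  # toy token vocabulary
GAMMA = 0.25               # hypothetical fraction of vocab marked "green"

def green_list(prev_token):
    # KGW-style: seed a PRNG with the previous token and select
    # gamma * |V| "green" tokens that the watermarked model favors.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(GAMMA * len(VOCAB))))

def generate_watermarked(length, start=0):
    # Toy "model": always emits a token from the current green list,
    # an extreme version of the bias a real watermarked LLM applies softly.
    tokens = [start]
    for _ in range(length):
        tokens.append(min(green_list(tokens[-1])))  # deterministic pick
    return tokens

def z_score(tokens):
    # Detector: count tokens that fall in their predecessor's green list
    # and compare against the gamma baseline expected by chance.
    hits = sum(1 for p, t in zip(tokens, tokens[1:]) if t in green_list(p))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

print(z_score(generate_watermarked(200)))  # far above typical thresholds (~4)
```

A spoofing attacker never sees `green_list` directly: by distilling from watermarked teacher outputs, the student statistically reproduces the green-token preference, so its text also passes this detector and gets misattributed to the victim model.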
