Has Google’s AI watermarking system been reverse-engineered? | The Verge
Kind of, but not really. It’s complicated.

by Jess Weatherbed | Apr 14, 2026, 1:53 PM UTC

Image: Cath Virginia / The Verge, Getty Images

A software developer claims to have reverse-engineered Google DeepMind’s SynthID system, showing how AI watermarks can be stripped from generated images or manually inserted into other works. A claim that, according to Google, isn’t true.

The developer, going by the username Aloshdenny, has open-sourced their work on GitHub and documented their process, claiming all it required was 200 Gemini-generated images, signal processing, and “way too much free time.” A little weed also seemed to help.

“No neural networks. No proprietary access,” Aloshdenny said on Medium. “Turns out if you’re unemployed and average enough ‘pure black’ AI-generated images, every nonzero pixel is literally just the watermark staring back at you.”

SynthID is a near-invisible watermarking system that tags content generated by Google’s AI tools, embedding itself in the pixels of images at the point of creation. It was designed to be difficult to remove without degrading image quality, and is used widely across the AI products offered by Google — everything spat out by models like Nan...
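The averaging idea Aloshdenny describes can be sketched in a few lines. This is a toy illustration, not SynthID’s actual scheme: it assumes the watermark is a faint additive pattern on top of an otherwise all-black image, with independent per-image noise. All shapes, thresholds, and the fake image generator below are invented for the sketch; averaging many such images cancels the noise, leaving the pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

# Hypothetical watermark: a sparse, faint additive pattern (not SynthID's real one).
watermark = (rng.random((H, W)) < 0.05).astype(float) * 2.0

def fake_black_image():
    """Stand-in for a 'pure black' AI-generated image: zeros plus
    the watermark pattern plus per-image sensor/codec-style noise."""
    noise = rng.normal(0.0, 0.5, (H, W))
    return np.clip(watermark + noise, 0.0, 255.0)

# Average 200 images, mirroring the sample count the developer cites.
estimate = np.mean([fake_black_image() for _ in range(200)], axis=0)

# After averaging, noise shrinks toward zero while the watermark persists,
# so thresholding the mean recovers the pattern.
recovered = estimate > 1.0
accuracy = float(np.mean(recovered == (watermark > 0)))
```

The same estimate could then, in principle, be subtracted from a watermarked image or added to an unwatermarked one, which is the strip/insert capability the post claims.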