Voice Cloning with Consent
Published October 28, 2025 · Margaret Mitchell (meg) and Lucie-Aimée Kaffee (frimelle)

In this blog post, we introduce the idea of a "voice consent gate" to support voice cloning with consent. We provide an example Space and accompanying code to start the ball rolling on the idea.

Realistic voice generation technology has become uncannily good in the past few years. In some situations, it is possible to generate a synthetic voice that sounds almost exactly like the voice of a real person. What once felt like science fiction is now reality: voice cloning. With just a few seconds of recorded speech, anyone's voice can be made to say almost anything.

Voice generation, and in particular the subtask of voice cloning, has notable risks and benefits. The risks of "deepfakes", such as the cloned voice of former President Biden used in robocalls, can mislead people into believing that someone said things they never said. On the other hand, voice cloning can be a powerful beneficial tool: it can help people who have lost the ability to speak communicate in their own voice again, or assist people in learning new languages and dialects.

So how do we enable meaningful use without malicious use? We're exploring one possible answer: a voice consent gate. That is a system where a voice can be cloned only when the speaker explicitly says they consent. In other words, the model won't speak in your voi...
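To make the idea concrete, here is a minimal sketch of what a consent gate could look like in code. This is not the implementation from the accompanying Space; it is a hypothetical illustration in which the recorded reference audio is first transcribed (by any speech-to-text system of your choosing), and cloning is only permitted if the transcript approximately matches a required consent phrase. The phrase, threshold, and function names below are all assumptions made for the example.

```python
import difflib

# Example consent phrase -- an assumption for this sketch, not a fixed standard.
CONSENT_PHRASE = "i give my consent to clone my voice"


def normalize(text: str) -> str:
    """Lowercase and strip punctuation so minor transcription differences don't matter."""
    kept = "".join(ch for ch in text.lower() if ch.isalpha() or ch.isspace())
    return " ".join(kept.split())


def consent_given(transcript: str,
                  phrase: str = CONSENT_PHRASE,
                  threshold: float = 0.8) -> bool:
    """Return True if the transcribed audio is close enough to the consent phrase.

    Fuzzy matching tolerates small speech-recognition errors; the 0.8 threshold
    is an illustrative choice, not a recommended value.
    """
    ratio = difflib.SequenceMatcher(None,
                                    normalize(transcript),
                                    normalize(phrase)).ratio()
    return ratio >= threshold


def clone_voice(reference_audio: bytes, transcript: str) -> str:
    """Gatekeeper: refuse to clone unless the consent phrase was spoken.

    `transcript` is assumed to come from a speech-to-text model run on
    `reference_audio`; the actual cloning call is left as a placeholder.
    """
    if not consent_given(transcript):
        raise PermissionError(
            "Consent phrase not detected; refusing to clone this voice.")
    # ... hand reference_audio to the voice-cloning model here ...
    return "cloning-allowed"
```

The key design choice is that consent is carried *in the audio itself*: the same recording that provides the voice sample must also contain the spoken consent phrase, so a scraped clip of someone speaking about something else cannot pass the gate.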