ByteDance's new AI video generation model, Dreamina Seedance 2.0, comes to CapCut | TechCrunch
The new model in CapCut will have built-in protections for making video from real faces or unauthorized intellectual property.
OpenAI may be dialing back its efforts in the video generation market with the shutdown of its Sora app, but ByteDance on Thursday confirmed that its new audio and video model, Dreamina Seedance 2.0, is now rolling out in its editing platform, CapCut. ByteDance says the model allows creators to draft, edit, and sync video and audio content using prompts, images, or reference videos.

The phased rollout will begin with CapCut users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam, with more markets added over time.

The CapCut launch follows a recent report that the model's global rollout would be paused while ByteDance worked to address intellectual property issues, after the model drew criticism from Hollywood over alleged copyright infringement. That likely explains the limited number of markets where the model is currently available within CapCut. In China, the model is available to users of ByteDance's Jianying app.

The video generation model works without reference images, even if the creator only uses a few words to describe the scene they have in mind, ByteDance says in its announcement. The model is also good at rendering realistic textures, movement, and lighting across a range of visual perspectives and angles, which the company notes could be used to edit, enhance, or correct creators' own footage. Another use case would be allowing creators to test potential ideas based on early concepts or sketches before f...