Say What?! The Rise of AI-Generated Voice Deepfakes

Sep 13, 2023 - natlawreview.com
The article discusses the rise of "voice deepfakes," where synthetic voices are created using generative artificial intelligence (AI) to mimic real people's voices. This technology has been used for malicious purposes, such as tricking identity verification systems or creating content without the voice owner's consent. The issue has particularly affected professional voice actors, whose voices have been used without their knowledge or payment. Legal options for those affected are limited, as voices are not copyrightable and the application of existing laws can be challenging.

The article suggests several measures to combat voice deepfakes. For companies, anti-spoofing measures, employee education, multifactor authentication, and anti-fraud solutions are recommended. In China, new rules require manipulated material to bear digital signatures or watermarks and have the subject's consent. For voice actors, preconditions or compensation structures for AI use, takedown demands, and support from trade associations are suggested. The article concludes that awareness is the first line of defense, as the law struggles to keep up with this new technology.

Key takeaways:

  • Several tech companies are training speech tools to mimic a speaker's voice, contributing to the rise of “voice deepfakes”: synthetic voices created with generative artificial intelligence from the recordings of unknowing or unwilling participants.
  • These voice deepfakes have been used to trick identity verification software and gain access to accounts, and to create content without the consent of the original voice owner.
  • Several defenses against voice deepfakes are being developed or recommended, including anti-spoofing measures, educating employees about the danger of deepfakes, callback procedures that let employees end a suspicious call, multifactor authentication, and anti-fraud solutions.
  • Professional voice actors are the “canary in the coal mine” when it comes to creating rules to combat voice deepfakes, and legislation imposing transparency requirements on the data AI developers ingest and the models they use would be a positive step.