AI voice cloning has already been used in several scams, from tricking banks into transferring money to staging "family emergency" calls to relatives. The technique works by feeding an AI model enough recordings of a person's speech for it to reproduce the nuances of their voice. Celebrities and public figures are especially exposed because so many recordings of their voices are available online, but with the rise of social media and video content, non-celebrities and even children now have enough material online to train a model effectively.
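As a rough illustration of how little material is needed, modern zero-shot systems such as the open-source Coqui XTTS model can mimic a voice from a few seconds of reference audio rather than a lengthy training run. The sketch below uses that library's published API; the file paths and the spoken text are placeholders, not part of any reported scam.

```python
# Minimal sketch of zero-shot voice cloning with the open-source Coqui TTS
# library (pip install TTS). Paths are placeholders; a few seconds of clean
# reference speech is typically enough for a recognizable clone.
from TTS.api import TTS

# Load a multilingual zero-shot voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize arbitrary text in the voice captured in the reference clip.
tts.tts_to_file(
    text="This is a demonstration of synthetic speech.",
    speaker_wav="reference_clip.wav",  # e.g. a short sample pulled from a public video
    language="en",
    file_path="cloned_output.wav",
)
```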
Key takeaways:
- The Federal Trade Commission (FTC) is offering a $25,000 reward for the best solution to combat the threat of AI voice cloning, also known as audio deepfakes.
- The FTC is particularly interested in ideas that focus on the prevention, monitoring, and evaluation of AI-based voice cloning fraud (a toy example of automated monitoring appears after this list).
- Submissions will be judged on feasibility, resilience to technological change, and how they assign liability and responsibility to companies.
- Criminals have already used AI voice cloning successfully in several cases, impersonating individuals to trick victims into sending money or revealing sensitive information.
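On the monitoring side, one naive idea, sketched below under heavy assumptions, is to extract spectral features from incoming audio and score it with a classifier trained offline to separate real from synthetic speech. The model file `deepfake_detector.joblib`, the `score_clip` helper, and the feature recipe are all hypothetical; production detectors are considerably more sophisticated.

```python
# Hypothetical sketch of automated deepfake-audio monitoring: summarize a clip
# as MFCC features and score it with a pre-trained real-vs-synthetic classifier.
import joblib
import librosa

def score_clip(path: str, model_path: str = "deepfake_detector.joblib") -> float:
    """Return the model's estimated probability that `path` is synthetic speech."""
    audio, sr = librosa.load(path, sr=16_000, mono=True)
    # Summarize the clip as mean MFCCs -- a common, if simplistic, speech feature.
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    features = mfcc.mean(axis=1).reshape(1, -1)
    clf = joblib.load(model_path)  # e.g. a scikit-learn classifier trained offline
    return float(clf.predict_proba(features)[0, 1])

if __name__ == "__main__":
    print(f"P(synthetic) = {score_clip('incoming_call.wav'):.2f}")
```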