
Your AI clone could target your family, but there’s a simple defense

Dec 07, 2024 - arstechnica.com
The FBI has issued a warning about the increasing use of AI models by criminals to generate convincing profile photos, identification documents, and chatbots for fraudulent activities. The bureau recommends limiting public access to personal voice recordings and images online, making social media accounts private, and restricting followers to known contacts to avoid falling victim to these scams.

The concept of a 'proof of humanity' word, first proposed by AI developer Asara Near, is gaining traction as a countermeasure against AI voice synthesis and deepfakes. This secret word, known only to trusted contacts, can be used to verify the identity of the person during a suspicious voice or video call. Despite the high-tech nature of AI identity fraud, this simple and ancient method of using a secret word or phrase for identity verification remains effective.

Key takeaways:

  • The FBI has warned that criminals are using AI models to create convincing scams, including voice scams, fake profile photos, identification documents, and chatbots on fraudulent websites.
  • The FBI recommends limiting public access to personal voice recordings and images online, suggesting making social media accounts private and restricting followers to known contacts.
  • The concept of a 'proof of humanity' word, a secret word used to verify identity in the context of AI voice synthesis and deepfakes, was first proposed by AI developer Asara Near in 2023.
  • The idea of using a secret word or phrase to verify identity, similar to a password, is becoming common in the AI research community and is seen as a simple and free method to combat AI identity fraud.
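The shared-secret idea above is a human protocol, not software, but choosing the word well matters: a randomly selected word resists guessing far better than one a family member picks themselves. As a minimal illustrative sketch (the word list here is a small stand-in; a real list such as the EFF diceware list would be much larger), a cryptographically secure random pick could look like:

```python
import secrets

# Stand-in word list for illustration only; use a large, published
# word list (e.g. the EFF diceware list) in practice.
WORDS = [
    "glacier", "thimble", "paprika", "lantern", "walrus",
    "quasar", "mosaic", "ferret", "obelisk", "saffron",
]

def proof_of_humanity_phrase(n_words: int = 2) -> str:
    """Pick n_words uniformly at random using a CSPRNG (secrets module)."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

phrase = proof_of_humanity_phrase()
```

However the phrase is generated, it should be shared in person or over a trusted channel, never in the same call or message thread an attacker might be monitoring.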