The concept of a 'proof of humanity' word, first proposed by AI developer Asara Near, is gaining traction as a countermeasure against AI voice synthesis and deepfakes. Known only to trusted contacts, this secret word can be used to verify a caller's identity during a suspicious voice or video call. Despite the high-tech nature of AI identity fraud, this simple, ancient method of using a secret word or phrase for identity verification remains effective.
Key takeaways:
- The FBI has warned that criminals are using AI models to create convincing scams, including voice scams, fake profile photos, identification documents, and chatbots on fraudulent websites.
- The FBI recommends limiting public access to personal voice recordings and images online, suggesting making social media accounts private and restricting followers to known contacts.
- The concept of a 'proof of humanity' word, a secret word used to verify identity in the context of AI voice synthesis and deepfakes, was first proposed by AI developer Asara Near in 2023.
- The idea of using a secret word or phrase to verify identity, much like a password, is gaining currency in the AI research community as a simple, free way to combat AI identity fraud.
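The password analogy in the last takeaway can be made concrete. The real check is purely verbal, of course, but a minimal, hypothetical sketch shows how software handles the same pattern: a pre-shared secret compared with a constant-time check (the phrase below and the `verify_phrase` helper are illustrative assumptions, not from the source).

```python
import hmac

# Hypothetical pre-shared phrase, agreed with trusted contacts in
# person and never posted online (the FBI's advice on limiting
# public exposure applies here too).
EXPECTED_PHRASE = "blue heron at midnight"

def verify_phrase(spoken: str) -> bool:
    # Normalize casing and whitespace, then compare with
    # hmac.compare_digest, the idiomatic constant-time comparison
    # for secrets in Python.
    normalized = " ".join(spoken.lower().split())
    return hmac.compare_digest(normalized, EXPECTED_PHRASE)

print(verify_phrase("Blue  Heron at Midnight"))  # True
print(verify_phrase("what's the secret word?"))  # False
```

As with passwords, the security rests entirely on the phrase staying secret; once spoken on a compromised channel it should be replaced.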