To mitigate new account fraud, organizations are advised to watch for unusual movements in AI-generated videos, be wary of extremely high-resolution videos, and gather threat intelligence to identify cybercrime trends. As AI systems become more sophisticated, organizations must adapt their defenses and deploy AI-powered detection tools of their own to counter evolving threats like new account fraud.
Key takeaways:
- New account fraud, where threat actors open new accounts using false identities, costs businesses billions of dollars each year and is used for financial gain, money laundering, gaining access to services, and obtaining benefits.
- Security researchers have discovered a new deepfake tool on the dark web, sold by a threat actor known as ProKYC, which can bypass two-factor authentication and create fake personas capable of fooling facial recognition scanners.
- Threat actors use AI to create a fake image of a person, synthesize a forged passport or government-issued ID, and generate a deepfake video; they then initiate a new account fraud attack by connecting to a cryptocurrency exchange and uploading the forged documents.
- Organizations can mitigate new account fraud by watching for jittery or unusual movements in AI-generated videos, being wary of extremely high-resolution videos, and gathering threat intelligence to identify cybercrime trends; a simple screening sketch follows this list.
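To make the first two mitigations concrete, here is a minimal Python sketch of how an intake pipeline might pre-screen a submitted verification video using OpenCV. The thresholds, the file name, and the frame-difference jitter metric are illustrative assumptions, not any vendor's method; a production system would pair heuristics like these with dedicated deepfake-detection models.

```python
"""
Illustrative pre-screening heuristics only: flags (1) unusually high
resolution and (2) erratic frame-to-frame motion, two signals called out
in the mitigation guidance above. Thresholds are assumptions to be tuned
against known-good footage. Requires: pip install opencv-python numpy
"""
import cv2
import numpy as np

# Assumed thresholds -- tune for your own capture devices and users.
MAX_WIDTH, MAX_HEIGHT = 3840, 2160  # flag anything above 4K as unusual for webcam KYC
JITTER_THRESHOLD = 2.5              # stddev of mean frame-to-frame pixel change


def screen_video(path: str) -> list[str]:
    """Return a list of heuristic warnings for a submitted verification video."""
    warnings = []
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        return ["could not open video"]

    # Heuristic 1: extremely high resolution is atypical for webcam selfies.
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    if width > MAX_WIDTH or height > MAX_HEIGHT:
        warnings.append(f"unusually high resolution: {width}x{height}")

    # Heuristic 2: measure mean frame-to-frame change; erratic spikes can
    # indicate the jittery motion typical of some AI-generated video.
    prev_gray, diffs = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            diffs.append(cv2.absdiff(gray, prev_gray).mean())
        prev_gray = gray
    cap.release()

    if len(diffs) > 1 and np.std(diffs) > JITTER_THRESHOLD:
        warnings.append(f"erratic frame-to-frame motion (stddev={np.std(diffs):.2f})")
    return warnings


if __name__ == "__main__":
    # "submitted_selfie_video.mp4" is a hypothetical input file.
    for w in screen_video("submitted_selfie_video.mp4"):
        print("WARNING:", w)
```

A video that trips either check would be routed to manual review rather than rejected outright, since legitimate users occasionally produce high-resolution or shaky footage.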