The FBI also expressed concerns about the use of generative AI in creating deepfakes, which can be used to fabricate convincing false content. The agency stressed the need for watermarking technology to distinguish synthetic content from authentic material. Although AI giants like OpenAI, Microsoft, Google, and Meta have pledged to introduce such technology, the proliferation of custom-tuned, open-source AI models is inevitable, and containment efforts are likely to be ineffective.
Key takeaways:
- The FBI has warned about the increasing use of AI technology in cyberattacks, including phishing attacks and malware development.
- Open-source AI models are a particular focus for law enforcement, as they can be easily adapted for malicious purposes.
- There are growing security concerns around the use of generative AI technology in creating deepfakes, which can be used to spread misinformation and cause harm.
- Major AI companies, including OpenAI, Microsoft, Google, and Meta, have pledged to introduce watermarking technology to help distinguish synthetic content from authentic material.
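To make the watermarking idea concrete, here is a hypothetical toy sketch of a statistical text watermark, loosely in the spirit of the "green-list" schemes studied in research: a secret key deterministically splits the vocabulary into green and non-green words for each context, a watermarking generator prefers green words, and a detector flags text whose green fraction is far above the ~50% expected by chance. All names (`is_green`, `green_fraction`, `looks_watermarked`) and the key are illustrative assumptions, not any vendor's actual scheme.

```python
import hashlib

def is_green(prev_word: str, word: str, key: str = "demo-key") -> bool:
    # Hash (key, previous word, candidate) so that, for any context,
    # roughly half the vocabulary lands in the secret "green" set.
    digest = hashlib.sha256(f"{key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_sentence(vocab: list[str], length: int,
                       key: str = "demo-key") -> str:
    # Toy "generator": at each step, prefer a green continuation.
    words = [vocab[0]]
    for _ in range(length - 1):
        nxt = next((w for w in vocab if is_green(words[-1], w, key)),
                   vocab[0])  # fall back if no green candidate exists
        words.append(nxt)
    return " ".join(words)

def green_fraction(text: str, key: str = "demo-key") -> float:
    # Detector statistic: fraction of adjacent word pairs that are green.
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b, key) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

def looks_watermarked(text: str, threshold: float = 0.75) -> bool:
    # Unwatermarked text should hover near 0.5; watermarked output
    # is biased well above it, so a simple threshold separates them.
    return green_fraction(text) >= threshold
```

A real deployment would watermark at the model's token level during sampling and use a proper hypothesis test rather than a fixed threshold, but the core idea is the same: the key holder can detect a bias that is statistically invisible to a casual reader.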