
FBI: AI Makes It Easier for Hackers to Generate Attacks

Jul 31, 2023 - tomshardware.com
The FBI has issued a warning about the increasing use of AI technology in cyberattacks, including phishing and malware development. The agency highlighted the rise of open-source AI models, which can be fine-tuned for specific needs, as a particular concern. Examples include using models such as ChatGPT to develop malware that can evade security systems, and subscription-based offerings like WormGPT that provide a ready-made environment for launching remote phishing attacks.

The FBI also expressed concern about the use of generative AI to create deepfakes, which can be used to fabricate convincing false content. The agency stressed the need for watermarking technology to distinguish synthetic data from authentic data. Although AI giants such as OpenAI, Microsoft, Google, and Meta have pledged to introduce such technology, the proliferation of privately tailored, open-source AI models is inevitable, and containment efforts are likely to be ineffective.

Key takeaways:

  • The FBI has warned about the increasing use of AI technology in cyberattacks, including phishing attacks and malware development.
  • Open-source AI models are a particular focus for law enforcement, as they can be easily adapted for malicious purposes.
  • There are growing security concerns around the use of generative AI to create deepfakes, which can be used to spread misinformation and cause harm.
  • Major AI companies, including OpenAI, Microsoft, Google, and Meta, have pledged to introduce watermarking technology to help distinguish synthetic data from authentic data.