While no significant LLM-assisted attacks have been detected yet, Microsoft and OpenAI are actively shutting down the accounts and assets associated with these hacking groups. Microsoft warns that AI may feature in future cyberattacks, including voice impersonation and AI-powered fraud. In response, Microsoft is developing AI tools of its own, such as Security Copilot, to counter AI-driven attacks and help cybersecurity professionals identify breaches.
Key takeaways:
- Microsoft and OpenAI have detected attempts by Russian, North Korean, Iranian, and Chinese-backed groups using large language models (LLMs) like ChatGPT to refine their cyberattacks.
- The Strontium group, linked to Russian military intelligence, and other hacking groups have been using LLMs to understand complex technologies, improve their scripts, and develop social engineering techniques.
- Microsoft and OpenAI have not yet detected any significant attacks using LLMs, but they have been shutting down all accounts and assets associated with these hacking groups.
- Microsoft is developing a new AI assistant, Security Copilot, designed to help cybersecurity professionals identify breaches and make sense of the vast volume of signals and data generated daily by cybersecurity tools.