The report also predicts that AI will increase the value and impact of cyberattacks over the next two years, with threat actors likely able to identify high-value assets for examination and exfiltration thanks to AI's rapid data summarization. By 2025, phishing, spoofing, and social engineering attempts are expected to be so convincing that distinguishing scams from legitimate communications will become increasingly difficult. The report echoes similar warnings Google issued last year about the use of large language models (LLMs) and generative AI in cyberattacks.
Key takeaways:
- Artificial intelligence (AI) is expected to help cybercriminals carry out cyberattacks, increasing the volume of attacks such as ransomware and phishing scams, according to a report from the UK's Government Communications Headquarters (GCHQ).
- AI will improve threat actors' social engineering capabilities, enabling more convincing contact with victims and increasing the value and impact of cyberattacks over the next two years.
- Advances in AI will make it harder to differentiate scams from legitimate practices, with phishing, spoofing, and social engineering attempts expected to become increasingly difficult to recognize by 2025.
- The report echoes Google's forecast from last year, which predicted that large language models (LLMs) and generative AI tools will be used in cyberattacks to make malicious content appear more authentic.