
Google is bringing generative AI to its security tooling | TechCrunch

Aug 29, 2023 - news.bensbites.co
At Google Cloud Next, Google announced several generative AI enhancements to its security product line, aiming to simplify finding information in large volumes of security data. The new capabilities, described by Steph Hay, head of UX for cloud security at Google, are designed to mitigate threats, reduce the workload of security teams, and help close the cyber talent gap. The enhancements include Duet AI in Mandiant Threat Intelligence, Duet AI in Chronicle Security Operations, and Duet AI in Security Command Center, all of which aim to help security teams understand threats better and respond effectively.

The company acquired the security intelligence firm Mandiant last year, giving it valuable data about security threats to offer customers. The new Duet AI tools help summarize that data, identify threats, and recommend actions, though their effectiveness will depend on the quality of the summaries and recommendations they produce. Google is also addressing AI's "hallucination problem," where large language models make things up when they lack a clear answer, by grounding the models in a more limited data set drawn from these security tools. All three Duet AI products are currently in preview and are expected to be released later this year.
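To make that mitigation concrete, here is a minimal sketch in Python of the grounding idea: retrieve a small set of relevant findings and instruct the model to answer only from them. Every detail here (the sample findings, the keyword retrieval, the prompt wording) is an illustrative assumption, not Google's implementation, which has not been published.

```python
# Minimal sketch of "grounding": restrict the model to a fixed set of
# security findings so it cannot answer from anything else.
# The findings and retrieval logic below are hypothetical examples.

FINDINGS = [
    {"id": "F-001", "summary": "Outbound traffic to known C2 domain from VM web-prod-3"},
    {"id": "F-002", "summary": "Service account key older than 90 days on project billing-etl"},
    {"id": "F-003", "summary": "Brute-force SSH attempts against bastion host from 203.0.113.7"},
]

def retrieve(question: str, findings: list[dict]) -> list[dict]:
    """Naive keyword retrieval: keep findings sharing a word with the question."""
    words = {w.lower().strip("?.,") for w in question.split()}
    return [f for f in findings if words & set(f["summary"].lower().split())]

def grounded_prompt(question: str, findings: list[dict]) -> str:
    """Build a prompt that limits the model to the retrieved findings only."""
    context = "\n".join(f"[{f['id']}] {f['summary']}" for f in retrieve(question, findings))
    return (
        "Answer using ONLY the findings below. "
        "If they do not contain the answer, say so.\n\n"
        f"Findings:\n{context}\n\nQuestion: {question}"
    )

# The resulting prompt would be sent to the model endpoint.
print(grounded_prompt("What is happening on the bastion host?", FINDINGS))
```

In practice the retrieval step would be far more sophisticated, but the principle is the same: the narrower and more relevant the context the model is allowed to draw on, the less room it has to hallucinate.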

Key takeaways:

  • At Google Cloud Next, Google announced several new generative AI enhancements to its security product line, aiming to make it easier to find information in a massive amount of security data by simply asking questions in plain language.
  • The company is introducing Duet AI in Mandiant Threat Intelligence, Chronicle Security Operations, and Security Command Center, which help security teams make sense of the mass of information they are seeing by providing a relevant summary so they can quickly grasp the nature of a particular threat.
  • The usefulness of these AI tools could depend on the quality of the summaries and recommendations the model gives back, and on how well less skilled analysts can understand the information they are getting.
  • The hallucination problem, where large language models make things up when they don't have a clear answer, could be a huge issue when it comes to security. However, Google believes that providing a more limited data set, based on the information from these security tools, could help mitigate that problem.
