However, the AI would sometimes provide incorrect information, a problem known as "hallucination". To combat this, Microsoft incorporated its own data into Security Copilot to give answers that are more up-to-date and relevant. Despite these challenges, Microsoft believes in the technology's potential, describing Security Copilot as a "closed-loop learning system" that improves over time. The company plans to make the product generally available this summer.
Key takeaways:
- Microsoft's Security Copilot, one of its most important AI products, was introduced in 2023 and uses OpenAI's GPT-4 and an in-house model to answer questions about cyberthreats.
- During development, Microsoft had to "cherry pick" demonstration examples because of the AI model's tendency to "hallucinate", i.e. produce incorrect or irrelevant outputs.
- The company has incorporated its own security data into the product to provide more up-to-date, relevant information and to mitigate hallucination issues.
- Microsoft's Security Copilot is described as a "closed-loop learning system" that improves over time based on user feedback.