Microsoft cherry-picked outputs from AI product due to hallucinations

Jan 10, 2024 - businessinsider.com
Microsoft's Security Copilot, one of its most important AI products, was built using OpenAI's GPT-4 and an in-house model to answer questions about cyberthreats. The development process was challenging because of limited computing resources and the need to "cherry-pick" examples, since the AI sometimes hallucinated. The company initially worked on its own machine-learning models for security use cases, but shifted focus to GPT-4 after gaining early access. To test the AI, Microsoft showed it security logs and checked whether it could understand the content and identify any malicious activity.

However, the AI would sometimes produce incorrect information, a problem known as "hallucination". To combat this, Microsoft incorporated its own data into the Security Copilot product to supply more up-to-date and relevant information. Despite the challenges, Microsoft believes in the technology's potential, describing Security Copilot as a "closed-loop learning system" that improves over time. The company plans to make the product generally available this summer.

Key takeaways:

  • Microsoft's Security Copilot, one of its most important AI products, was introduced in 2023 and uses OpenAI's GPT-4 and an in-house model to answer questions about cyberthreats.
  • Microsoft had to "cherry pick" examples during the development of Security Copilot due to the AI model's tendency to "hallucinate" or produce incorrect or irrelevant outputs.
  • The company has incorporated its own data into the Security Copilot product to provide more up-to-date and relevant information, and to help solve hallucination issues.
  • Microsoft's Security Copilot is described as a "closed-loop learning system" that improves over time based on user feedback.
