Greenwald also revealed that the examples shown in the presentation were cherry-picked; it's unclear whether Microsoft used them, or was transparent about how they were selected, in its demos to the government. A Microsoft spokesperson clarified that the technology discussed was exploratory work that predated Security Copilot and was tested on simulations created from public datasets, with no customer data used.
Key takeaways:
- Microsoft reportedly "cherry-picked" examples of its generative AI's output after it frequently "hallucinated" incorrect responses, according to leaked audio of an internal presentation on an early version of Microsoft's Security Copilot.
- The AI tool, designed to assist cybersecurity professionals, was tested by having it analyze a Windows security log for possible malicious activity. However, it often gave different answers when asked the same question.
- Security Copilot is largely built on OpenAI's GPT-4 large language model, to which Microsoft had early access. The AI was not trained on cybersecurity-specific data; it relied on GPT-4's general training data.
- It's unclear whether Microsoft used these cherry-picked examples in its presentations to the government and other potential customers. A Microsoft spokesperson said the technology discussed was exploratory work that predated Security Copilot and was tested on simulations created from public datasets.