AI hallucinations are a common flaw in generative AI models and chatbots; a Stanford University study found them to be "pervasive and disturbing". OpenAI's GPT-3.5 hallucinated 69% of the time when asked precise, verifiable questions, while Meta's Llama 2 model did so 88% of the time. Both Google and OpenAI are investigating ways to reduce hallucinations: Google by incorporating user feedback, and OpenAI by adopting a technique known as "process supervision", which rewards a model for each correct step of reasoning rather than only for its final answer.
Key takeaways:
- Google's Gemini and Microsoft's Copilot, two major AI chatbots, have been found to fabricate data when asked about the outcome of Super Bowl LVIII.
- AI hallucinations, where a model fabricates data or outcomes, are a pervasive issue in generative AI and chatbots, with a Stanford University study finding them to be "pervasive and disturbing".
- Both Google and OpenAI are working on methods to reduce these hallucinations: Google by incorporating user feedback, and OpenAI through its "process supervision" training technique.
- Despite the risk of hallucinations, some professionals, including lawyers, have used AI chatbots such as ChatGPT to draft legal documents, which can contain citations to fictitious court cases and fabricated quotes.