The research also revealed that hallucination rates vary among leading AI companies. OpenAI's technologies had the lowest rate, at around 3%, while Google's Palm chat system had the highest, at 27%. Vectara hopes its research will encourage industry-wide efforts to reduce hallucinations. However, researchers warn that chatbot hallucination is not an easy problem to solve, because chatbots learn from patterns in data and operate according to probabilities rather than verified facts.
Key takeaways:
- Chatbots such as ChatGPT and Google's chatbot often 'hallucinate', or invent, information, at rates ranging from 3% to 27% according to research from the start-up Vectara.
- This behavior is a serious concern when chatbots are used for court documents, medical information, or sensitive business data.
- Vectara's research also showed that hallucination rates vary widely among leading AI companies, with Google's system having the highest rate.
- Despite efforts to reduce hallucinations, researchers warn that the problem is not easy to solve, given the probabilistic nature of chatbots and their reliance on learning patterns from data.