
Scholars: AI isn’t “hallucinating” — it’s bullshitting

Jun 09, 2024 - news.bensbites.com
The article discusses a paper published in "Ethics and Information Technology" by scholars from the University of Glasgow, who argue that the inaccuracies generated by large language models (LLMs) such as OpenAI's ChatGPT are better described as "bullshit" than as "AI hallucinations". The term "bullshit", as defined by philosopher Harry Frankfurt, refers to statements made without regard for their truth, intended only to impress or persuade. The scholars argue that LLMs, which generate text from statistical patterns without any intrinsic concern for accuracy, fit this definition better than the concept of hallucination: they produce plausible-sounding statements with no grounding in factual reality.

The distinction matters because it shapes how we understand and address the inaccuracies these models produce. If inaccuracies are framed as hallucinations, it implies that the AI is trying, and failing, to convey truthful information. The scholars counter that AI models have no beliefs, intentions, or understanding: their inaccuracies arise not from misperception or hallucination, but because the models are designed to produce text that looks and sounds right, with no intrinsic mechanism for ensuring factual accuracy. For its part, OpenAI has stated that improving the factual accuracy of ChatGPT is a key goal, with GPT-4 reportedly 40% more likely to produce factual content than GPT-3.5.

Key takeaways:

  • Large language models (LLMs) like OpenAI’s ChatGPT, despite their impressive capabilities, are known for generating persistent inaccuracies, often referred to as “AI hallucinations.” Scholars argue that these inaccuracies are better understood as “bullshit.”
  • The term “AI hallucination” is misleading because it implies that the AI has a perspective or an intent to perceive and convey truth, which it does not. LLM output fits the definition of bullshit better than the concept of hallucination: the models generate text based on patterns in their training data, without any intrinsic concern for accuracy.
  • Calling AI inaccuracies “hallucinations” encourages overblown hype about the models’ abilities and suggests fixes for the inaccuracy problem that may not work. It can also misdirect AI-alignment efforts among specialists.
  • OpenAI has stated that improving the factual accuracy of ChatGPT is a key goal and reports progress in this area, with GPT-4 being 40% more likely to produce factual content than GPT-3.5.