
Sam Altman Admits That OpenAI Doesn't Actually Understand How Its AI Works

Jun 04, 2024 - futurism.com
OpenAI, a leading AI development company, is struggling to understand how its large language models (LLMs) function. During a recent AI summit, CEO Sam Altman admitted that the company has not yet solved interpretability, meaning it cannot trace its AI models' outputs back to the decisions that produced them. Despite this, Altman asserted that the company's AI models are generally considered safe and robust, a statement that was met with skepticism.

The issue of AI interpretability is a significant problem in the AI industry, with many researchers struggling to understand the inner workings of their systems. A recent UK government report concluded that AI developers understand little about how their systems operate. Other AI companies, such as Anthropic, are investing in interpretability research to better understand their models and improve safety. However, the process is challenging and costly, and the industry is still far from fully understanding AI systems.

Key takeaways:

  • OpenAI CEO Sam Altman admitted during the International Telecommunication Union's AI for Good Global Summit that the company is struggling to understand how its large language models (LLMs) function and how to interpret their outputs.
  • Despite concerns raised about the safety of releasing new, more powerful models without fully understanding them, Altman maintained that the AIs are "generally considered safe and robust."
  • A recent scientific report commissioned by the UK government concluded that AI developers "understand little about how their systems operate" and that scientific knowledge about AI is "very limited."
  • OpenAI competitor Anthropic is investing in interpretability research to better understand its AI models and improve their safety, but admits that the work has only just begun and is proving to be challenging.
