The article also explores the creative capabilities of GPT-4, noting that while it can generate and evaluate ideas, it does not share the user's goals, so the user must ultimately judge its outputs. The author also warns of the dangers of AI-generated content, citing the example of a supermarket app that suggested potentially deadly recipes, and concludes by emphasizing the importance of understanding the limitations of LLMs and their tenuous grasp on reality, even when they are used for creative purposes.
Key takeaways:
- Sam Altman, OpenAI’s CEO, views the ability of AI systems like ChatGPT to 'hallucinate' or generate nonsensical outputs as a part of their creative power, not a flaw.
- Large language models (LLMs) like GPT do not check the validity of their outputs against any external perception of the world, so in a sense everything they produce could be considered a 'hallucination'.
- While GPT can generate and evaluate ideas, it does not share the user's goals, making it a generatively creative tool rather than an adaptively creative one.
- Users of GPT need to understand that it operates on word probabilities, not human concerns, and that it is not designed to report accurate information.
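The last point, that a model "operates on word probabilities," can be made concrete with a toy sketch: at each step an LLM scores every candidate next token and samples from the resulting distribution, with no check on whether the continuation is true. The token names and logit values below are invented for illustration and do not come from any real model.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution over tokens.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next(probs, rng=random.random):
    # Sample a token in proportion to its probability. Nothing here asks
    # whether the chosen word is accurate -- only how likely it is.
    r, acc = rng(), 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        acc += p
        if r <= acc:
            return tok
    return tok  # fallback for floating-point rounding

# Hypothetical logits for the token after "The sky is".
logits = {"blue": 4.0, "clear": 2.5, "falling": 0.5}
probs = softmax(logits)
```

Even an implausible token like "falling" keeps nonzero probability and will occasionally be sampled, which is the mechanical sense in which plausible-sounding but false output is built into the design rather than being a malfunction.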