The author further argues that equating generative AI with AGI (artificial general intelligence) is a fundamental error. Despite claims from industry leaders that AGI is imminent, the author points to several unsolved problems with generative AI, such as its tendency to generate false information and its instability. The author concludes that if these issues prove unfixable, generative AI may not have the impact people are expecting, and we should reconsider building our world around that premise.
Key takeaways:
- The hype surrounding generative AI and its projected trillion-dollar markets may not be justified: current revenues are only in the hundreds of millions, and the projected growth is speculative.
- Generative AI's main sources of revenue are semi-automated writing of code and text, but other potential paying customers may lose interest quickly due to its limitations and errors.
- There are serious, unsolved problems at the core of generative AI, including its tendency to confabulate false information, its inability to reliably interface with external tools, and its instability.
- The author, Gary Marcus, warns against building national and global policy on the premise that generative AI will be world-changing, as doing so could create unnecessary tension and leave consumers open to exploitation.