The author concludes that while GPT-4 is useful, it will not lead to general AI. They suggest that improvements in model performance are general rather than niche, and that the cost and generation time per token are growing exponentially in exchange for only incremental gains. They anticipate that GPT-5 will offer incremental improvements rather than a significant leap forward.
Key takeaways:
- When it comes to prompts for GPT, less is more. Over-specifying can confuse the model and produce less accurate results.
- Streaming API responses and showing users the words being typed at variable speed is a significant UX innovation from ChatGPT, since it masks latency (see the streaming sketch after this list).
- GPT struggles to produce a null result: when the correct answer is "nothing found", it often hallucinates rather than returning nothing (see the extraction sketch after this list).
- Despite its limitations, GPT is extremely reliable for use cases that involve analyzing, summarizing, or extracting information from a given block of text.
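A minimal sketch of the streaming point above, assuming the OpenAI Python SDK (v1.x), a valid `OPENAI_API_KEY` in the environment, and the `gpt-4` model name; the original article does not specify any particular code, so treat this as an illustration rather than the author's implementation:

```python
# Stream tokens as they are generated instead of waiting for the full
# completion, so the UI can show words "being typed" and mask latency.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the text below in two sentences: ..."}],
    stream=True,  # ask the API to send partial deltas as they are produced
)

for chunk in stream:
    # Each chunk carries a small delta of the response text.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```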
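And a sketch of the extraction and "null result" points, again assuming the same SDK and model; the prompt wording and the `extract_company_names` helper are hypothetical, but the pattern of keeping the prompt short and explicitly allowing an empty answer follows the takeaways above:

```python
# Extract structured data from a block of text, giving the model an explicit,
# sanctioned way to say "nothing found" so it is less tempted to hallucinate.
import json
from openai import OpenAI

client = OpenAI()

def extract_company_names(text: str) -> list[str]:
    """Return company names mentioned in `text`, or an empty list if none."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                # A short, direct prompt; "return []" is the escape hatch
                # that stands in for a null result.
                "content": (
                    "Return a JSON array of company names mentioned in the text below. "
                    "If there are none, return []. Text:\n" + text
                ),
            }
        ],
    )
    # Assumes the model replies with bare JSON; a production version would
    # validate or retry on parse errors.
    return json.loads(response.choices[0].message.content)
```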