Lessons after a half-billion GPT tokens - Ken Kantzer's Blog

Apr 13, 2024 - news.bensbites.com
The author shares seven lessons learned from using OpenAI's GPT-4 at their startup, gettruss.io. They found that less specific prompts produced better results, and that the only API they needed was 'chat'. They also found that the streaming API improved the user experience, that GPT struggled to return a null result (it would rather hallucinate than admit it found nothing), and that the output window was limited. Vector databases and RAG/embeddings proved largely useless for their use cases, and hallucination was rare as long as the relevant information was present in the text.
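As a hedged illustration of the "chat is all you need" finding, a minimal call might look like the sketch below. The model name, prompt, and function name are assumptions for illustration, not taken from the article.

```python
# A minimal sketch of the pattern the article describes: routing every
# task, even one-shot summarization, through the chat completions API.
# Requires the openai Python client (v1.x) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; the article does not name a specific model string
        messages=[{"role": "user", "content": f"Summarize:\n\n{text}"}],
    )
    return response.choices[0].message.content
```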

The author concludes that while GPT-4 is useful, they do not believe it will lead to general AI. They note that improvements in model performance have been general rather than niche, and that each incremental improvement comes at exponentially growing token cost and latency. They anticipate that GPT-5 will offer incremental improvements rather than a significant leap forward.

Key takeaways:

  • When it comes to prompts for GPT, less is more. Over-specifying can confuse the model and produce less accurate results.
  • Streaming the response and showing users words typed out at variable speed is a significant UX innovation of ChatGPT, since it masks per-request latency (see the first sketch after this list).
  • GPT struggles to produce the null hypothesis: when there is nothing to return, it tends to hallucinate an answer rather than return nothing (see the second sketch below).
  • Despite its limitations, GPT is extremely reliable for use cases that involve analyzing, summarizing, or extracting information from a given block of text.
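For the streaming takeaway, a minimal sketch using the OpenAI Python client (v1.x) might look like this; the model and prompt are placeholders, not from the article:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this text: ..."}],
    stream=True,  # tokens arrive incrementally as they are generated
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        # Printing each delta as it arrives produces the "typing" effect
        # that masks the latency of waiting for the full completion.
        print(delta, end="", flush=True)
```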
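For the null-hypothesis problem, one common workaround (an assumption here, not necessarily the author's approach) is to give the model an explicit escape hatch, such as a JSON null sentinel; the field name and prompt wording are illustrative:

```python
import json
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Extract the invoice number from the text below. "
    'Respond with JSON: {"invoice_number": "<value>"} '
    'or {"invoice_number": null} if none is present.\n\n'
)

def extract_invoice_number(text: str) -> str | None:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT + text}],
    )
    # Production code should guard against non-JSON output here.
    data = json.loads(response.choices[0].message.content)
    # None is the "null hypothesis" the article says GPT resists: without
    # an explicit escape hatch, the model tends to invent a value instead.
    return data.get("invoice_number")
```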
