The author further breaks down the product problem into two aspects: the product design communicates certainty when the model is inherently uncertain, and the product does not indicate what kinds of questions can be asked. The author suggests three approaches to tackling these issues: building a general-purpose chatbot, narrowing the product to a specific domain with a custom UI, or abstracting the input and output as functions embedded inside another product. The author concludes that while LLMs are a general-purpose technology, the best way to deploy them may be to unbundle them into single-purpose tools and experiences.
Key takeaways:
- Generative AI models like ChatGPT are not databases and do not produce precise factual answers; they are probabilistic systems and cannot guarantee completely accurate output.
- There are two ways to address the problem of AI inaccuracies: treating it as a science problem, where models will improve over time, or treating it as a product problem, where we build useful products around models that may get things wrong.
- Product design can mislead users by communicating certainty when the model is inherently uncertain. The product should communicate what it can and cannot do, and what good questions might look like.
- There are different approaches to using AI: narrowing the product to a specific domain with a custom UI, or abstracting the input and output as functions embedded inside another product. With the latter, the user may not even realize that generative AI is involved.