The author suggests starting with normal programming practices to understand the problem space, then reaching for AI models only where they are genuinely needed. This approach gives you more control over the models, better privacy, and room for continuous improvement. The author emphasizes that while AI can be powerful, it works best when used sparingly and in combination with traditional code.
Key takeaways:
- Large Language Models (LLMs) like ChatGPT, while versatile, can be expensive, slow, and undifferentiated, making them a weak foundation for a unique AI product.
- Instead of relying solely on LLMs, it's recommended to build your own toolchain combining normal code with specialized AI models where necessary.
- Training your own models allows for more control over improvements, privacy, and costs, and can result in a more efficient and differentiated product.
- When building AI products, it's advised to avoid using AI for as long as possible, and only incorporate it when standard coding doesn't solve a problem well.
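The takeaways above can be sketched as a simple fallback pattern: deterministic code handles the cases it solves well, and a small, specialized model is invoked only when the rules can't decide. This is an illustrative sketch, not code from the article; the function names and the keyword-based sentiment example are invented for demonstration, and `model_sentiment` is a stub standing in for a model you would train yourself.

```python
from typing import Optional


def rule_based_sentiment(text: str) -> Optional[str]:
    """Cheap, deterministic first pass: plain code, no AI involved."""
    positives = {"great", "love", "excellent"}
    negatives = {"terrible", "hate", "awful"}
    words = set(text.lower().split())
    if words & positives and not words & negatives:
        return "positive"
    if words & negatives and not words & positives:
        return "negative"
    return None  # ambiguous: normal code can't decide this case


def model_sentiment(text: str) -> str:
    """Stub for a small, specialized model you own and can retrain.

    In a real system this would call your own model rather than a
    general-purpose LLM, keeping cost, latency, and privacy under
    your control.
    """
    return "neutral"


def classify(text: str) -> str:
    """Try normal code first; fall back to the model only if needed."""
    return rule_based_sentiment(text) or model_sentiment(text)
```

The design point is that the AI surface area stays small: most inputs never touch a model, and the model that does exist is one you can measure, retrain, and improve independently.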