
Ask HN: People who switched from GPT to their own models. How was it?

Feb 27, 2024 - news.ycombinator.com
This is a summary of a Hacker News discussion thread about users' experiences switching from GPT (Generative Pre-trained Transformer) models to their own. The original post asks for insights from those who have made the switch in a production use case.

One user, iAkashPaul, shares a positive experience using Mistral-Instruct-0.1 for call/email summarization, Mixtral for contract mining, and OpenChat to augment a chatbot equipped with RAG tools. They note that the tradeoffs of running at INT8 precision are acceptable until more advanced hardware becomes widely available. Another user, ParetoOptimal, mentions that they no longer use LLMs for personal use. Other users ask about the base model used and the methodologies applied.
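The INT8 tradeoff mentioned above can be sketched with a toy example: symmetric per-tensor quantization maps float weights to 8-bit integers, trading a small, bounded rounding error for a 4x memory saving versus FP32. The values and function names below are illustrative; real inference stacks typically quantize per-channel and fuse the scaling into the matrix multiplies.

```python
# Toy sketch of symmetric INT8 quantization (illustrative only).
def quantize_int8(weights):
    """Map floats to int8 range [-127, 127] with one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.42, -1.3, 0.07, 0.9, -0.55]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2
```

The largest-magnitude weight pins the scale, so outlier weights widen the step size for everything else; that is one reason quantized models degrade on some tasks but not others.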

Key takeaways:

  • The discussion is about the experience of switching from GPT to custom models for production use cases.
  • One user found that evaluating large language models (LLMs) is surprisingly difficult and that GPT-4 isn't that great in general.
  • Another user shared their experience of running different models for various tasks such as call/email summarization, contract mining, and chatbot augmentation. They found the experience great and the tradeoffs acceptable.
  • Another user mentioned that they no longer use LLMs for personal use.
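The RAG-augmented chatbot pattern mentioned in the thread can be sketched as a retrieval step: score stored documents against the user's query and prepend the best match to the prompt before it reaches the model. Here a bag-of-words cosine similarity stands in for a real embedding model, and all document text and names are illustrative, not from the thread.

```python
# Toy sketch of the retrieval step in a RAG pipeline (illustrative).
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two Counter-based term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the stored document most similar to the query."""
    qv = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(qv, Counter(d.lower().split())))

docs = [
    "Refunds are processed within five business days.",
    "Our office is open Monday through Friday.",
]
context = retrieve("how long do refunds take", docs)
prompt = f"Context: {context}\nQuestion: how long do refunds take"
```

Production setups replace the bag-of-words vectors with learned embeddings and a vector index, but the shape of the pipeline (retrieve, then generate with the retrieved context in the prompt) is the same.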
