Ask HN: What are some practical ways you are using fine tuned LLMs?

Aug 09, 2023 - news.ycombinator.com
The user PaulHoule shared his experience using BERT models for a content-based recommender system: fine-tuning them took more than 50 times the training effort of an embedding-based model that performed about as well. He also mentioned that fine-tuned BERT failed to predict whether a headline would receive upvotes on Hacker News, or what its comments-to-votes ratio would be.
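
As a rough illustration of the embedding-based alternative he describes, the sketch below pairs a frozen pre-trained sentence encoder with a lightweight logistic-regression head. The `sentence-transformers` library and the `all-MiniLM-L6-v2` checkpoint are assumptions for illustration only; the thread does not say which embedding model he used.

```python
# A minimal sketch of an embedding-based text classifier, assuming the
# sentence-transformers library; the specific model is an illustrative
# choice, not the one from the thread.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: headlines and binary relevance labels.
headlines = ["Show HN: I built a tiny search engine",
             "Ask HN: How do you back up your photos?"]
labels = [1, 0]

# Encode each headline into a fixed-size vector once; the encoder itself
# is never fine-tuned, which is what keeps training cheap.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(headlines)

# Train only the small classification head on top of the frozen embeddings.
clf = LogisticRegression().fit(X, labels)

# Score a new headline by embedding it the same way.
print(clf.predict_proba(encoder.encode(["Ask HN: Favorite CLI tools?"]))[:, 1])
```

Because only the linear head is trained, the whole pipeline fits in seconds on a CPU, which is consistent with the 50x effort gap described above.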

PaulHoule found these results unsurprising given how noisy the recommender signal is and how unpredictable user behavior on Hacker News can be. He noted that his best model for these tasks remains a bag-of-words model, suggesting it is both more efficient and more effective in this context.
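
For reference, a bag-of-words baseline of the kind he mentions can be as simple as TF-IDF features feeding a linear model. The setup below (scikit-learn, TF-IDF, logistic regression) is an assumption, since the comment does not detail his implementation.

```python
# A minimal bag-of-words baseline, assuming scikit-learn; the TF-IDF +
# logistic-regression pairing is illustrative, not PaulHoule's exact setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: headlines and whether they were upvoted.
headlines = ["Show HN: I built a tiny search engine",
             "Ask HN: How do you back up your photos?"]
upvoted = [1, 0]

# Each headline becomes a sparse vector of TF-IDF-weighted word counts;
# word order is discarded, which is what makes this a bag-of-words model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, upvoted)

# Probability that a new headline gets upvoted, under this toy model.
print(model.predict_proba(["Ask HN: Favorite CLI tools?"])[:, 1])
```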

Key takeaways:

  • PaulHoule tried fine-tuning BERT models for a content-based recommender system but found they required more than 50x the training effort of an embedding-based model.
  • The performance of the BERT models and the embedding-based model was approximately the same.
  • Fine-tuned BERT was unsuccessful at predicting whether a headline would get upvoted on HN or what the comments/votes ratios would be.
  • The best model for predicting upvotes and comments/votes ratios on HN, according to PaulHoule, is still a bag-of-words model.