
Way Enough - Fine-tuning gpt-3.5-turbo to learn to play "Connections"

Jan 15, 2024 - danielcorin.com
The author attempted to fine-tune an OpenAI language model (gpt-3.5-turbo) to solve the New York Times word game "Connections", which asks players to group 16 words into 4 categories of 4 words each. Past game solutions were collected with a script and turned into a fine-tuning dataset. The initial results were unsatisfactory due to errors in the dataset; after correcting them and re-running the fine-tuning, the model improved only slightly. The author concluded that the exercise was a valuable learning experience, but the model did not get significantly better at the game after fine-tuning.
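
The post does not reproduce the dataset itself, but OpenAI's chat fine-tuning format expects one JSON object per line, each containing a `messages` array. A hypothetical Connections training record (the prompt wording and example words below are invented for illustration, not taken from the author's dataset) might be built like this:

```python
import json

def make_training_example(words, groups):
    """Build one chat-format fine-tuning record (a JSONL line) for a
    Connections puzzle: the 16 words go in the user turn, the labeled
    groups go in the assistant turn."""
    prompt = (
        "Group these 16 words into 4 categories of 4 words each:\n"
        + ", ".join(words)
    )
    answer = "\n".join(
        f"{category}: {', '.join(members)}"
        for category, members in groups.items()
    )
    record = {
        "messages": [
            {"role": "system", "content": "You solve NYT Connections puzzles."},
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": answer},
        ]
    }
    return json.dumps(record)

# Two groups shown for brevity; a real record would carry 4 groups of 4.
groups = {
    "FISH": ["BASS", "PIKE", "SOLE", "CARP"],
    "MUSIC": ["NOTE", "SCALE", "CHORD", "REST"],
}
words = [w for members in groups.values() for w in members]
line = make_training_example(words, groups)
```

Writing one such line per past puzzle yields the JSONL file that the fine-tuning job consumes.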

The author also ran the validation set against the gpt-4 model, which performed notably better than gpt-3.5, and has requested access to fine-tune gpt-4 for future work.
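
The post compares models on a validation set without showing the scoring code. One straightforward way to grade a predicted grouping (an assumption about the metric, not the author's exact harness) is to count how many predicted groups exactly match a solution group, ignoring both group order and word order:

```python
def count_correct_groups(predicted, solution):
    """Count predicted groups that exactly match some solution group,
    comparing as sets so ordering never matters."""
    solution_sets = {frozenset(g) for g in solution}
    return sum(1 for g in predicted if frozenset(g) in solution_sets)

# Invented example: two groups instead of Connections' four.
solution = [["BASS", "PIKE", "SOLE", "CARP"],
            ["NOTE", "SCALE", "CHORD", "REST"]]
predicted = [["PIKE", "BASS", "CARP", "SOLE"],    # matches the FISH group
             ["NOTE", "SCALE", "CHORD", "BASS"]]  # BASS misplaced, no match
score = count_correct_groups(predicted, solution)  # → 1
```

Averaging this score over held-out puzzles gives a simple number to compare a fine-tuned model against the base model.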

Key takeaways:

  • The author attempted to fine-tune an OpenAI language model to solve the NYTimes word game "Connections".
  • The initial results were not very successful, with the model struggling to correctly group words and identify categories, despite the author's efforts in data preparation and prompt engineering.
  • After fixing some bugs in the dataset and adjusting the model, the fine-tuned model showed a slight improvement in performance, but it was not significantly better than the original model.
  • The author concluded that while the results were not as expected, the process was a valuable learning experience and expressed interest in exploring fine-tuning with the gpt-4 model in the future.