
GPT-3.5 Turbo fine-tuning and API updates

Aug 22, 2023 - openai.com
OpenAI has announced the availability of fine-tuning for GPT-3.5 Turbo, with GPT-4 fine-tuning coming this fall. This update allows developers to customize models for better performance at scale. Early tests have shown that a fine-tuned GPT-3.5 Turbo can match or outperform base GPT-4 on certain narrow tasks. Fine-tuning has been used to improve steerability, reliable output formatting, and custom tone. It also lets businesses shorten their prompts while maintaining performance. Fine-tuning with GPT-3.5 Turbo supports 4k-token contexts, twice the capacity of previous fine-tuned models.

The fine-tuning process involves uploading training data, creating a fine-tuning job, and then using the resulting fine-tuned model; a fine-tuning UI is slated to launch soon. Safety remains a priority: fine-tuning training data is passed through the Moderation API and a GPT-4-powered moderation system to detect unsafe content. The cost of fine-tuning is split into training and usage costs. OpenAI has also announced the availability of `babbage-002` and `davinci-002` as replacements for the original GPT-3 base models, which can be fine-tuned using a new API endpoint. The old endpoint will be turned off on January 4th, 2024.
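The two-step flow described above (prepare and upload training data, then create a fine-tuning job) can be sketched as follows. The chat-message schema matches the format the announcement describes for GPT-3.5 Turbo fine-tuning; the example contents and file name are illustrative, not from the article:

```python
import json

def build_example(system: str, user: str, assistant: str) -> dict:
    # One training example in the chat format expected for
    # GPT-3.5 Turbo fine-tuning: a list of role/content messages.
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }

def to_jsonl(examples: list[dict]) -> str:
    # Serialize examples as JSONL: one JSON object per line,
    # the format accepted for uploaded training files.
    return "\n".join(json.dumps(ex) for ex in examples)

examples = [
    build_example(
        "You are a support bot with a friendly, concise tone.",
        "How do I reset my password?",
        "Click 'Forgot password' on the sign-in page and follow the email link.",
    ),
]
jsonl = to_jsonl(examples)

# With the data written to disk, the job itself would be started via the
# OpenAI SDK (requires an API key; the file ID below is hypothetical):
#   file = openai.File.create(file=open("data.jsonl", "rb"), purpose="fine-tune")
#   openai.FineTuningJob.create(training_file=file.id, model="gpt-3.5-turbo")
```

Once the job completes, the fine-tuned model is referenced by its returned model ID in ordinary chat completion calls.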

Key takeaways:

  • Fine-tuning for GPT-3.5 Turbo is now available, allowing developers to customize models for better performance in their specific use cases. Fine-tuning for GPT-4 is expected to be available this fall.
  • Use cases for fine-tuning include improved steerability, reliable output formatting, and custom tone. Fine-tuning also allows for shorter prompts and supports 4k-token contexts, twice that of previous fine-tuned models.
  • Fine-tuning costs are divided into initial training cost and usage cost, with specific rates per 1K tokens for training, usage input, and usage output.
  • New GPT-3 models, `babbage-002` and `davinci-002`, are now available as replacements for the original GPT-3 base models. The old `/v1/fine-tunes` endpoint will be deprecated and turned off on January 4th, 2024.
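The two-part cost model in the takeaways can be illustrated with a small calculator. The per-1K-token rates used here ($0.008 training, $0.012 usage input, $0.016 usage output for fine-tuned GPT-3.5 Turbo) are the rates published in the announcement; the worked example below assumes a 100,000-token training file run for 3 epochs:

```python
# Per-1K-token rates for fine-tuned GPT-3.5 Turbo, as published
# in the announcement (subject to change by OpenAI).
TRAINING_PER_1K = 0.008
INPUT_PER_1K = 0.012
OUTPUT_PER_1K = 0.016

def training_cost(training_tokens: int, epochs: int = 3) -> float:
    # Initial cost: billed per token in the training file, per epoch.
    return training_tokens * epochs * TRAINING_PER_1K / 1000

def usage_cost(input_tokens: int, output_tokens: int) -> float:
    # Ongoing cost for calls made to the fine-tuned model.
    return (input_tokens * INPUT_PER_1K + output_tokens * OUTPUT_PER_1K) / 1000

# A 100,000-token file trained for 3 epochs:
print(round(training_cost(100_000, epochs=3), 2))  # → 2.4
```

The key point is that training is a one-time cost proportional to file size and epochs, while usage is metered separately on every request to the fine-tuned model.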