New models and developer products announced at DevDay

Nov 06, 2023 - openai.com
OpenAI has announced several new additions and improvements to its platform, including the new GPT-4 Turbo model, the Assistants API, and multimodal capabilities. GPT-4 Turbo is more capable and cheaper than GPT-4, and supports a 128K context window. The Assistants API makes it easier for developers to build AI apps that call models and tools. The multimodal capabilities include vision, image creation (DALL·E 3), and text-to-speech (TTS). These features are rolling out to OpenAI customers starting today.
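
For orientation, here is a minimal sketch of calling the new model through the openai Python SDK (v1.x). The prompt content is illustrative, and `gpt-4-1106-preview` is the GPT-4 Turbo preview identifier referenced at launch; treat both as assumptions rather than a definitive integration.

```python
# Sketch: one chat completion against the GPT-4 Turbo preview model.
# Assumes the openai Python SDK v1.x and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview, 128K context window
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the DevDay announcements in one sentence."},
    ],
)
print(response.choices[0].message.content)
```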

The company has also reduced pricing across many parts of the platform. Other updates include improvements to function calling, better instruction following, a JSON mode, reproducible outputs and log probabilities, and an updated GPT-3.5 Turbo. The Assistants API is aimed at developers building agent-like experiences within their own applications, while the new API modalities (GPT-4 Turbo with vision, DALL·E 3, and TTS) extend the platform beyond text. OpenAI is also launching a Custom Models program and an experimental access program for GPT-4 fine-tuning.
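
JSON mode and reproducible outputs map to request parameters on the chat completions endpoint. A hedged sketch follows, assuming the same SDK and model identifier as above; the prompt and seed value are illustrative.

```python
# Sketch: JSON mode plus a fixed seed for (best-effort) reproducible outputs.
# Assumes the openai Python SDK v1.x; prompt and seed are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    seed=42,                                  # reproducible outputs
    response_format={"type": "json_object"},  # JSON mode: response must be valid JSON
    messages=[
        {"role": "system", "content": "Reply with a single JSON object."},
        {"role": "user", "content": 'Return {"announcements": [...]} with three DevDay items.'},
    ],
)
print(response.choices[0].message.content)
# system_fingerprint identifies the backend configuration the seeded request ran against
print(response.system_fingerprint)
```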

Key takeaways:

  • OpenAI has announced several new additions and improvements, including a new GPT-4 Turbo model, Assistants API, and new multimodal capabilities in the platform such as vision, image creation, and text-to-speech.
  • The new GPT-4 Turbo model is more capable, cheaper, and supports a 128K context window. It also has knowledge of world events up to April 2023 and is available for all paying developers to try.
  • The Assistants API is designed to help developers build agent-like experiences within their own applications. It provides new capabilities such as Code Interpreter and Retrieval as well as function calling (see the sketch after this list).
  • OpenAI is also introducing new modalities in the API, including GPT-4 Turbo with vision, DALL·E 3 for image generation, and a text-to-speech API for generating human-quality speech from text.
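
To make the agent-like workflow concrete, here is a rough sketch of the Assistants API flow (assistant → thread → message → run) with the Code Interpreter tool, again assuming the openai Python SDK v1.x; the assistant name, instructions, and question are placeholders, not part of the announcement.

```python
# Sketch: Assistants API flow (assistant -> thread -> message -> run)
# using the Code Interpreter tool. Assumes the openai Python SDK v1.x;
# name, instructions, and the question are placeholders.
import time
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a personal math tutor. Write and run code to answer questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Solve 3x + 11 = 14.",
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Runs are asynchronous: poll until the run leaves the queued/in_progress states.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Messages come back newest first; print the conversation so far.
for message in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(message.role, ":", message.content[0].text.value)
```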