
OpenAI's newest model is GPT-4o | TechCrunch

May 13, 2024 - techcrunch.com
OpenAI is launching a new generative AI model, GPT-4o, which will be integrated into the company's products over the coming weeks. The model builds on its predecessor, GPT-4, with improved text, vision, and audio capabilities. GPT-4o can reason across voice, text, and vision, which OpenAI positions as a significant step toward more natural human-machine interaction. It also improves the experience of ChatGPT, OpenAI's AI-powered chatbot, enabling real-time interaction and the ability to pick up on emotion in the user's voice.

GPT-4o also upgrades ChatGPT's vision capabilities, letting it quickly answer questions about images or desktop screens. The model is more multilingual, with improved performance across 50 languages. In OpenAI's API, GPT-4o is twice as fast as GPT-4, half the price, and has higher rate limits. OpenAI is also releasing a desktop version of ChatGPT with a refreshed user interface.
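For readers who want to try the API access mentioned above, the sketch below shows a minimal call using the OpenAI Python SDK with the "gpt-4o" model identifier. The example image URL and prompts are placeholders, not from the article; this is an illustrative sketch, not OpenAI's official sample code.

# Minimal sketch: calling GPT-4o through the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Plain text prompt: GPT-4o handles ordinary chat completions like GPT-4,
# but (per the article) at lower cost and with higher rate limits.
text_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Summarize GPT-4o's new capabilities in one sentence."}
    ],
)
print(text_response.choices[0].message.content)

# Vision prompt: the same endpoint accepts image content parts, which is
# how the improved image understanding described above is exposed.
# The URL below is a placeholder.
vision_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this screenshot?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            ],
        }
    ],
)
print(vision_response.choices[0].message.content)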

Key takeaways:

  • OpenAI is launching a new AI model, GPT-4o, which improves on the capabilities of GPT-4 across text, vision, and audio, and will be rolled out across the company's products over the next few weeks.
  • GPT-4o enhances the ChatGPT experience, allowing users to interact with the chatbot more like an assistant, with real-time responsiveness and the ability to pick up on the emotion in a user's voice.
  • The new model also improves ChatGPT's vision capabilities, enabling it to quickly answer questions related to a given photo or desktop screen.
  • GPT-4o is more multilingual, with improved performance in 50 different languages, and in OpenAI's API, it is twice as fast as GPT-4, half the price, and has higher rate limits.