
OpenAI releases ChatGPT's hyperrealistic voice to some paying users | TechCrunch

Jul 31, 2024 - news.bensbites.com
OpenAI has started rolling out Advanced Voice Mode for ChatGPT, featuring GPT-4o's hyperrealistic audio responses. The alpha version will initially be available to a select group of ChatGPT Plus users, with a broader rollout planned for fall 2024. The voice feature, first showcased in May, drew attention for its striking resemblance to human speech, particularly the voice of actress Scarlett Johansson. After Johansson raised legal concerns, OpenAI removed the voice from its demo and denied having used the actress's voice.

Advanced Voice Mode differs from the current Voice Mode in that it uses GPT-4o's multimodal capabilities to process tasks without auxiliary models, resulting in lower-latency conversations. The new feature can also sense emotional intonation in the user's voice. OpenAI plans to limit the voice mode to four preset voices (Juniper, Breeze, Cove, and Ember), developed with paid voice actors. The company is also implementing filters that block requests to generate music or other copyrighted audio, to avoid potential legal issues.

Key takeaways:

  • OpenAI has begun rolling out Advanced Voice Mode for ChatGPT, featuring GPT-4o’s hyperrealistic audio responses, to a small group of ChatGPT Plus users.
  • The voice feature of GPT-4o, initially compared to Scarlett Johansson's voice, will now be limited to four preset voices (Juniper, Breeze, Cove, and Ember), created with paid voice actors.
  • OpenAI is releasing the new voice feature gradually to monitor its usage and has tested it with over 100 external red teamers who speak 45 different languages.
  • To avoid deepfake controversies and copyright infringement issues, OpenAI has introduced new filters to block certain requests to generate music or other copyrighted audio.