However, the technology has raised concerns about potential misuse, including voice cloning for malicious purposes. To mitigate this, OpenAI is implementing several safety measures, such as watermarking generated clones with inaudible identifiers and opening the tool to its red-teaming network to surface potential abuse. The company is also testing a security mechanism that requires users to read randomly generated text to prove they are present and aware of how their voice is being used.
Key takeaways:
- OpenAI has previewed its Voice Engine, a tool that can generate a synthetic copy of any voice from a 15-second audio sample, but there is no public release date yet.
- The Voice Engine is not trained on user data and does not offer controls to adjust the tone, pitch, or cadence of a voice.
- OpenAI is being cautious about potential misuse of the technology, with initial access limited to around 100 developers and to low-risk, socially beneficial use cases.
- OpenAI is also testing a security mechanism that requires users to read randomly generated text as proof of their presence and awareness of how their voice is being used.