Additionally, OpenAI announced updates to the GPT-4o and GPT-4o mini models in its Realtime API, which supports low-latency, AI-generated voice responses. OpenAI says the updated models are more data-efficient, reliable, and cost-effective than their predecessors. The Realtime API, still in beta, now includes features such as concurrent out-of-band responses and WebRTC support for real-time voice applications. OpenAI also added preference fine-tuning to its fine-tuning API and launched an early-access beta of software development kits (SDKs) for Go and Java.
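As a rough illustration of the preference fine-tuning launch, the sketch below creates a fine-tuning job with the OpenAI Python SDK using a DPO-style `method` payload. The file ID, base model snapshot, and `beta` value are placeholders, and the exact payload shape should be checked against OpenAI's fine-tuning documentation.

```python
from openai import OpenAI

client = OpenAI()

# Create a preference fine-tuning job. The training file is a placeholder ID
# for an uploaded JSONL dataset of preference pairs: each record holds a prompt
# plus a preferred and a non-preferred response.
job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",    # base model snapshot (illustrative)
    training_file="file-abc123",  # placeholder file ID
    method={
        "type": "dpo",
        "dpo": {"hyperparameters": {"beta": 0.1}},  # beta weights preference strength (assumed value)
    },
)
print(job.id, job.status)
```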
Key takeaways:
- OpenAI is rolling out its "reasoning" AI model, o1, to select developers in the "tier 5" usage category, requiring a minimum spend and account age.
- o1 offers advanced features such as function calling and image analysis, and adds a "reasoning_effort" parameter that controls how long the model spends reasoning before it responds (see the sketch after this list).
- OpenAI introduced new versions of its GPT-4o models in the Realtime API, which now supports WebRTC for real-time voice applications.
- OpenAI launched preference fine-tuning in its fine-tuning API and released early-access beta software development kits (SDKs) for Go and Java.
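To make the "reasoning_effort" takeaway concrete, here is a minimal sketch using the OpenAI Python SDK; the model name, prompt, and chosen effort level are illustrative, and access still depends on the tier 5 eligibility noted above.

```python
from openai import OpenAI

client = OpenAI()

# Call o1 with an explicit reasoning effort. Lower effort returns faster,
# cheaper answers; higher effort spends more reasoning tokens on harder problems.
response = client.chat.completions.create(
    model="o1",              # available to eligible tier 5 developers
    reasoning_effort="low",  # one of "low", "medium", "high"
    messages=[
        {"role": "user", "content": "Outline a test plan for a WebRTC voice agent."}
    ],
)
print(response.choices[0].message.content)
```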