Microsoft also announced new additions to its Phi-3 family of AI small language models (SLMs), including Phi-3-small, Phi-3-medium, and Phi-3-vision. These models are optimized for resource-constrained environments, and Phi-3-vision supports general visual reasoning tasks. The company has integrated Phi-3-mini into Azure AI's Models-as-a-Service (MaaS) offering and is adding new capabilities across its APIs to support multimodal experiences. New features are also coming to Azure AI Speech, including speech analytics and universal translation.
Key takeaways:
- Microsoft's Azure AI Studio is now generally available, enabling developers to build custom Copilot apps.
- OpenAI's GPT-4o model is now available as an API in Azure AI Studio, allowing developers to integrate text, image, and audio processing into a single model.
- Microsoft has announced Phi-3-small, Phi-3-medium, and Phi-3-vision, new additions to its Phi-3 family of AI small language models (SLMs); Phi-3-vision extends the family with multimodal visual reasoning.
- New features are shipping to Azure AI Speech in preview, including speech analytics and universal translation, to help developers build high-quality, voice-enabled apps.
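To illustrate the GPT-4o multimodal API mentioned above, the sketch below composes a single chat request that mixes text and an image reference using the OpenAI chat-completions content-part format. This is a minimal, hedged example: the endpoint, API version, deployment name, and image URL are placeholders, not values from the announcement.

```python
# Hedged sketch: composing a multimodal chat request for a GPT-4o
# deployment on Azure AI. All credentials/URLs below are placeholders.

def build_multimodal_messages(prompt: str, image_url: str) -> list[dict]:
    """Combine a text prompt and an image reference in one user message,
    using the chat-completions content-part format."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

messages = build_multimodal_messages(
    "Describe this chart.", "https://example.com/chart.png"
)

# With the openai Python SDK, the call would look roughly like
# (placeholder endpoint, key, and API version):
#
# from openai import AzureOpenAI
# client = AzureOpenAI(
#     azure_endpoint="https://<resource>.openai.azure.com",
#     api_key="<key>",
#     api_version="2024-05-01-preview",
# )
# response = client.chat.completions.create(model="gpt-4o", messages=messages)

print(messages[0]["role"])
```

The point of the single-message format is that text, image, and (where supported) audio inputs travel through one model and one API call, rather than separate per-modality services.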