The article also provides a step-by-step guide on how to set up a customized content filtering configuration for your resource via Azure OpenAI Studio. It explains how to create a new configuration, modify the severity level for both prompts and completions for each content category, assign a configuration to one or more deployments, and edit or delete a configuration. The article concludes with best practices for configuring content filters and provides additional resources for learning more about Responsible AI practices for Azure OpenAI, content filtering categories and severity levels, and red teaming.
Key takeaways:
- Azure OpenAI Service provides a content filtering system that uses multi-class classification models to detect harmful content in four categories: violence, hate, sexual, and self-harm.
- Content filters are configured at the resource level, and a single configuration can be associated with one or more deployments.
- The configurability feature allows customers to adjust settings so that prompts and completions are filtered for each content category at different severity levels.
- Customers can set up a customized content filtering configuration for their resource using Azure OpenAI Studio.
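To make the per-category, per-severity idea concrete, here is a minimal sketch of how such a filtering decision could work. This is illustrative only, not the Azure OpenAI API: the `Severity` levels, category names, and `FilterConfig` class are assumptions introduced for this example.

```python
# Illustrative sketch of per-category severity thresholds (NOT the Azure API).
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    # Hypothetical ordered severity scale, lowest to highest.
    SAFE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# The four content categories named in the article.
CATEGORIES = ("violence", "hate", "sexual", "self_harm")

@dataclass
class FilterConfig:
    """A per-category map: minimum severity at which content is blocked."""
    thresholds: dict

    def is_blocked(self, scores: dict) -> bool:
        # Block if any category's classified severity meets or
        # exceeds the threshold configured for that category.
        return any(
            scores.get(cat, Severity.SAFE) >= self.thresholds.get(cat, Severity.HIGH)
            for cat in CATEGORIES
        )

# Example: a configuration that blocks MEDIUM and above in every category.
strict = FilterConfig(thresholds={cat: Severity.MEDIUM for cat in CATEGORIES})
print(strict.is_blocked({"violence": Severity.LOW}))  # False: below threshold
print(strict.is_blocked({"hate": Severity.HIGH}))     # True: at/above threshold
```

In the actual service, the classification scores come from Azure's models and the thresholds are set per deployment through the configuration created in Azure OpenAI Studio; the sketch only shows the shape of the decision.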