However, the new terms leave Microsoft some wiggle room. The complete ban on Azure OpenAI Service usage applies only to U.S. police, not to international law enforcement, and it doesn't cover facial recognition performed with stationary cameras in controlled environments. This aligns with Microsoft's and OpenAI's recent approach to AI-related law enforcement and defense contracts. For instance, OpenAI is reportedly working with the Pentagon on several projects, including cybersecurity capabilities, and Microsoft has proposed using OpenAI's image generation tool, DALL-E, to help the Department of Defense build software for military operations.
Key takeaways:
- Microsoft has updated its policy to prohibit U.S. police departments from using generative AI through the Azure OpenAI Service, including integrations with OpenAI’s text- and speech-analyzing models.
- The new terms also explicitly ban the use of “real-time facial recognition technology” on mobile cameras by any law enforcement globally.
- The changes come a week after Axon, a maker of tech and weapons products for law enforcement, launched a product that uses OpenAI's GPT-4 generative text model to summarize audio from body cameras, raising concerns about potential racial biases and inaccuracies.
- Despite the ban, the new terms still permit use of the Azure OpenAI Service by international police, as well as facial recognition performed with stationary cameras in controlled environments.