The ban, however, applies only to U.S. police and does not cover facial recognition performed with stationary cameras in controlled environments. The policy change aligns with the recent approach Microsoft and OpenAI have taken to AI-related law enforcement and defense contracts. For instance, OpenAI is collaborating with the Pentagon on projects including cybersecurity capabilities, and Microsoft has proposed using OpenAI's image generation tool, DALL-E, to help the Department of Defense build software for military operations.
Key takeaways:
- Microsoft has updated its policy to prohibit U.S. police departments from using generative AI for facial recognition through the Azure OpenAI Service.
- The new terms also apply to law enforcement globally, explicitly barring the use of real-time facial recognition technology on mobile cameras in uncontrolled environments.
- The changes come after Axon, a maker of tech and weapons products for law enforcement, announced a product that uses OpenAI's GPT-4 generative text model to summarize audio from body cameras, raising concerns about potential racial biases and inaccuracies.
- Despite the ban, the new terms leave room for exceptions, such as facial recognition with stationary cameras in controlled environments and use by police departments outside the U.S.