Despite removing its explicit ban on military and warfare use, OpenAI maintains its prohibition on using its models to develop weapons that harm people. The company is also taking steps to prevent its AI tools from being used to spread election-related disinformation, a move that aligns with a similar effort by Microsoft, OpenAI's largest investor, which announced a five-step election-protection strategy in November.
Key takeaways:
- OpenAI is developing AI-powered cybersecurity capabilities for the US military and shifting its election-security work into high gear, according to company executives speaking at the World Economic Forum.
- The company has removed policy language that previously prohibited using its generative AI models for military and warfare applications and for generating malware.
- Despite this change, OpenAI still bans using its models to develop weapons that harm people and insists its tools must not be used for violence, destruction, or communications espionage.
- OpenAI is also in discussions with the US government about how its technology could help prevent suicides among veterans, and it is taking steps to ensure its generative AI tools aren't used to spread election-related disinformation.