Experts suggest the change could be a quiet move by OpenAI to soften its stance against doing business with militaries. The real-world implications of the policy update are unclear, but it comes at a time when militaries worldwide are eager to incorporate machine learning techniques. Despite concerns about the accuracy and security risks of large language models (LLMs) such as OpenAI's ChatGPT, the Pentagon remains interested in adopting AI tools.
Key takeaways:
- OpenAI has removed language from its usage policy that previously prohibited the use of its technology for military purposes, including weapons development and warfare.
- The company says the changes were made to make the policy clearer and more readable, but experts suggest OpenAI may be quietly weakening its stance against doing business with militaries.
- Despite the changes, OpenAI's new policy still prohibits the use of its technology to develop or use weapons, injure others, or engage in unauthorized activities that violate the security of any service or system.
- The changes come as militaries worldwide race to adopt machine learning, and as the Pentagon explores how it might use LLMs such as ChatGPT, despite concerns about their tendency to insert factual errors and other distortions.