OpenAI spokesperson Taya Christianson said the company uses a combination of automated systems, human review, and user reports to find and assess policy-violating tools. The persistence of these tools, however, suggests that OpenAI's content moderation efforts are falling short. As the company prepares to introduce a revenue-sharing model that pays developers based on how much their tools are used, it will need to strengthen moderation to prevent misuse of its technology.
Key takeaways:
- OpenAI's marketplace, which allows developers to create and sell custom versions of its ChatGPT technology, has been found to host numerous tools that violate the company's policies, including tools that generate pornographic content, facilitate academic cheating, and offer medical and legal advice.
- Despite OpenAI's claims that it has systems in place to monitor for policy violations, many offending tools are not only available but promoted on the marketplace's homepage.
- OpenAI removed some of the violating tools after being alerted by Gizmodo, but many others remain available.
- Experts suggest that OpenAI will need to make difficult content moderation decisions that can't be solved by automated means alone, especially if it introduces a revenue-sharing model that compensates developers based on the usage of their tools.