In a submission to the House of Lords communications and digital select committee, OpenAI also expressed support for independent analysis of its security measures, including "red-teaming" where third-party researchers test the safety of a product. The company is among those that have agreed to work with governments on safety testing their most powerful models before and after their deployment.
Key takeaways:
- OpenAI, the developer of AI tools such as ChatGPT, has stated that it would be impossible to create such tools without access to the copyrighted material used to train its AI models.
- The company is facing mounting legal pressure, including a lawsuit from the New York Times accusing it of "unlawful use" of copyrighted work to create its products.
- In its submission to the House of Lords communications and digital select committee, OpenAI argued that limiting training materials to out-of-copyright books and drawings would produce inadequate AI systems.
- OpenAI supports independent analysis of its security measures, including "red-teaming" where third-party researchers test the safety of a product by emulating the behaviour of rogue actors.