The demand for data retention stems from a broader legal context, including a related case brought by the Authors Guild. News organizations, among them The New York Times, requested the preservation of user logs to support their claims of copyright infringement. OpenAI contends that the requirement could undermine user trust by potentially exposing sensitive information, and it argues that the sheer volume of retained data makes it unlikely that publishers will find evidence of the alleged violations there; the publishers, for their part, believe the logs could strengthen their case.
Key takeaways:
- OpenAI is appealing a court order requiring it to retain output logs as part of a lawsuit brought by The New York Times, which claims OpenAI used its copyrighted works to train its models.
- The order affects users on ChatGPT Free, Plus, Pro, and Team subscriptions, as well as OpenAI API users without a Zero Data Retention agreement, but not ChatGPT Enterprise or Edu users.
- The data retention demand is linked to a previous case involving the Authors Guild and is intended to preserve evidence of alleged copyright infringement.
- OpenAI argues that the retention requirement undermines user trust and privacy, since the logs can contain sensitive personal and proprietary information.