The author argues that current AI systems are essentially black boxes: they cannot reliably track the provenance of generated text or images, which makes copyright infringement difficult to prevent. A sound system, he suggests, would provide a manifest of sources, which current systems do not. He predicts that the New York Times lawsuit is likely the first of many, with potential settlements reaching into the billions, and notes that Microsoft could also be implicated, since the experiments were conducted using Bing with DALL-E. The article concludes with a call to share the post to raise awareness of the issue among artists.
Key takeaways:
- Generative AI systems like DALL-E and ChatGPT, developed by OpenAI, have been found to reproduce copyrighted materials, posing potential infringement issues.
- These systems do not inform users when their output reproduces copyrighted material, nor do they disclose the provenance of the images they generate.
- Current AI systems cannot attribute their output to source materials; until a new architecture is developed that can reliably track the provenance of generated text and images, infringement issues are likely to continue.
- The New York Times lawsuit against OpenAI is likely just the first of many, with potential settlements of $100 million or more, posing significant financial risk to OpenAI and to partners such as Microsoft.