The article further notes that although all of the AI developers were contacted for comment, only OpenAI responded. The company's spokesperson indirectly acknowledged the data-poisoning issue, stating that OpenAI is constantly enhancing its safety measures and looking for ways to make its systems more robust against such "abuse". That response suggests tools like Glaze and Nightshade may indeed be effective: there would be little reason to harden systems against poisoning that posed no real threat. The article concludes by inviting readers to join various platforms and social media channels for further updates and discussion of the topic.
Key takeaways:
- AI developers such as OpenAI, Midjourney, and Stability AI are known to scrape millions of images from the internet to train their generative models, often without the original creators' consent.
- Countermeasures like Glaze and Nightshade have been released to keep artwork from being exploited for AI training: Glaze cloaks images to hinder style mimicry, while Nightshade poisons the training data itself. Their long-term effectiveness is uncertain (a conceptual sketch follows this list).
- An article in the MIT Technology Review discusses the conflict between artists and AI, the strategies artists use to make their artwork unusable for AI training, and the history and goals of Glaze.
- In its response to the article, OpenAI stated that it is constantly working to enhance its safety measures and is always looking for ways to make its systems more robust against what it views as "abuse", including artists' use of tools like Glaze and Nightshade.
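To make the idea of "cloaking" artwork concrete, here is a minimal sketch of the general shape of the technique: adding a small, visually subtle perturbation to an image before publishing it. This is not the actual Glaze or Nightshade algorithm; both optimize their perturbations against specific model feature extractors, whereas the random noise below is only a placeholder for that step, and the filenames are hypothetical.

```python
# Conceptual sketch of image "cloaking" before publishing artwork online.
# NOT the real Glaze/Nightshade method: those tools compute perturbations
# via gradient-based optimization against model feature extractors.
import numpy as np
from PIL import Image

def cloak_image(path_in: str, path_out: str, epsilon: float = 4.0) -> None:
    """Add bounded noise (max +/- epsilon per channel, 0-255 scale).

    Small epsilon values keep the change nearly invisible to humans
    while still altering the raw pixel data an AI scraper would ingest.
    """
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)

    # Placeholder perturbation: real tools optimize this so it shifts the
    # image's *learned features*, not just its pixels.
    perturbation = np.random.uniform(-epsilon, epsilon, size=img.shape)

    cloaked = np.clip(img + perturbation, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(path_out)

cloak_image("artwork.png", "artwork_cloaked.png")  # hypothetical filenames
```

The key design point, which the sketch only gestures at, is the asymmetry: a perturbation too faint for a viewer to notice can still be large in the feature space a model learns from, which is what makes poisoning plausible at all.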