The team plans to integrate Nightshade into Glaze, another tool they developed that creates a "style cloak" to mask artists' images from AI. Nightshade will also be released as open-source software. Other efforts to protect artists' work include watermarks that identify AI-generated images and legal action against companies that use copyrighted work for AI training. However, these measures do not prevent the initial data scraping used to build AI training sets.
Key takeaways:
- A group of researchers led by Ben Zhao, a professor of computer science at the University of Chicago, has developed a tool called "Nightshade" that can poison AI models that train on images by subtly manipulating those images at the pixel level (see the conceptual sketch after this list).
- When used as training data, the manipulated images can cause a model to start misinterpreting prompts, effectively breaking it down.
- Zhao's team also developed Glaze, a tool that creates a "style cloak" to mask artists' images and mislead AI art generators. Nightshade is set to be integrated into Glaze and also released as open-source software.
- While these tools can't change existing models, they can disrupt companies that continue to train their AI on artists' work, forcing them to manually find and remove poisoned images or retrain the model from scratch.
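
The sketch below is only a conceptual illustration of what a "poisoned" training sample looks like: an image with a small, hard-to-see pixel-level change, paired with a caption for a concept the image does not actually depict. It is not Nightshade's actual algorithm (which optimizes the perturbation rather than using random noise), and the file names and parameter values are hypothetical.

```python
# Conceptual sketch only -- NOT Nightshade's method. A real poisoning attack
# would optimize the perturbation; random noise is used here as a placeholder
# to show the overall structure of a poisoned (image, caption) pair.
import numpy as np
from PIL import Image


def perturb_image(path: str, epsilon: int = 4) -> Image.Image:
    """Apply a small, visually subtle pixel-level change.

    `epsilon` (assumed value) bounds the per-pixel shift on 8-bit channels,
    so the edited image looks essentially unchanged to a human viewer.
    """
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape).astype(np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    return Image.fromarray(poisoned)


def make_poisoned_sample(path: str, target_concept: str):
    """Return an (image, caption) pair whose caption names a concept the
    image does not depict, e.g. a perturbed dog photo captioned as a cat.
    A model trained on enough such mislabeled pairs can begin to confuse
    the two concepts when responding to prompts."""
    return perturb_image(path), f"a photo of a {target_concept}"


# Hypothetical usage:
# img, caption = make_poisoned_sample("dog.jpg", target_concept="cat")
# img.save("dog_poisoned.jpg")
```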