Nightshade exploits a security vulnerability in generative AI models: because they are trained on images scraped from the internet, poisoned images that end up in that training data can make them malfunction. The poisoned data is also hard to remove, since each corrupted sample has to be found and deleted individually. The tool has been tested against several AI models with successful results. There are concerns that the data poisoning technique could be used maliciously, but the researchers believe the tool could push AI companies to show greater respect for artists' rights.
Key takeaways:
- A new tool called Nightshade lets artists add invisible changes to the pixels of their artwork that can disrupt AI models if the art is used for training without permission. This is intended to deter AI companies from using artists' work without consent (a rough sketch of the general idea of such imperceptible pixel changes follows this list).
- Nightshade exploits a security vulnerability in generative AI models, causing them to malfunction when the altered images are incorporated into their training data. This can produce bizarre outputs, such as dogs rendered as cats or cars as cows.
- The team behind Nightshade also developed Glaze, a tool that allows artists to mask their personal style to prevent it from being scraped by AI companies. The team plans to integrate Nightshade into Glaze and make it open source.
- While there are concerns that the data poisoning technique could be used maliciously, it would take thousands of poisoned samples to significantly affect larger models, which are trained on billions of images. Researchers are nonetheless calling for work on defenses against such attacks.
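The summary above describes Nightshade only at a high level: its actual perturbations are reportedly optimized so that a model trained on the poisoned images learns the wrong concept associations (for example, "dog" drifting toward "cat"). The sketch below is not Nightshade's algorithm; it only illustrates the general shape of the workflow the list alludes to, namely applying a small, bounded, human-invisible change to an image's pixels before upload. The random noise stands in for the optimized perturbation, and the function name, `epsilon` bound, and file names are illustrative assumptions.

```python
# Minimal sketch of the "imperceptible pixel perturbation" idea.
# NOTE: this is NOT Nightshade's method. A real poisoning tool would
# replace the random noise with a perturbation computed against a target
# model so that training on the image shifts its concept associations.

import numpy as np
from PIL import Image


def add_bounded_perturbation(in_path: str, out_path: str, epsilon: int = 4) -> None:
    """Add a small pixel perturbation (magnitude <= epsilon) to an image.

    With a small epsilon the change is effectively invisible to a human
    viewer, which is the property the poisoning approach relies on.
    """
    # Load the image as signed integers so the perturbation can go negative.
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)

    # Placeholder perturbation: uniform noise in [-epsilon, +epsilon].
    delta = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)

    # Apply the perturbation and clip back to the valid 8-bit pixel range.
    poisoned = np.clip(img + delta, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path)


# Example usage (hypothetical file names):
# add_bounded_perturbation("artwork.png", "artwork_protected.png")
```

Generating a perturbation that actually corrupts a model's training, rather than harmless noise as shown here, is the hard part and is what the Nightshade research contributes.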