
Meet Nightshade, the new tool allowing artists to ‘poison’ AI models with corrupted training data

Oct 25, 2023 - venturebeat.com
Researchers at the University of Chicago have developed a tool called Nightshade that allows artists to "poison" their digital artwork as a deterrent against AI models training on it. The tool alters pixels in a way that is invisible to the human eye but confuses AI models, causing them to learn incorrect names for objects and scenery. For instance, the researchers were able to trick an AI model into identifying images of dogs as cats after training it on just 50 poisoned images. The tool is designed to protect artists' copyright and intellectual property rights against AI companies.

The researchers tested Nightshade using Stable Diffusion, an open-source text-to-image generation model, and found that it could trick the model into returning images of cats when prompted with related words like "husky," "puppy," and "wolf." The poisoning technique is difficult to defend against because it requires AI developers to detect and remove poisoned images, whose altered pixels are not obvious to the human eye. The researchers hope that Nightshade will help tip the balance of power back toward artists and away from AI companies.
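The article does not describe Nightshade's actual algorithm beyond saying that it makes imperceptible pixel changes, so the following is only an illustrative sketch of what a "poisoned" training sample could look like in principle: an image perturbed below the threshold of human perception, paired with a caption for a different concept. The file name, the random-noise perturbation, and the caption are hypothetical placeholders; a real attack would optimize the perturbation rather than draw it at random.

```python
# Illustrative sketch only: NOT Nightshade's actual method. It shows the
# general shape of a poisoned image/caption pair as described at a high
# level in the article: pixels nudged imperceptibly, caption pointing at a
# different concept, so a scraper that trains on the pair learns a wrong
# association (e.g. dog imagery tied to the word "cat").
import numpy as np
from PIL import Image


def add_imperceptible_perturbation(image_path: str, epsilon: float = 4.0) -> Image.Image:
    """Add small, bounded pixel noise (random here; real attacks optimize it)."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)
    # Noise bounded to [-epsilon, +epsilon] per channel; at roughly 4/255 of the
    # 8-bit dynamic range the change is effectively invisible to a human viewer.
    noise = np.random.uniform(-epsilon, epsilon, size=img.shape).astype(np.float32)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    return Image.fromarray(poisoned)


# Hypothetical poisoned training pair: the picture still looks like a dog to a
# person, but the caption deliberately names a different concept, mirroring the
# dog-to-cat confusion reported in the article. "dog.jpg" is a placeholder path.
poisoned_image = add_imperceptible_perturbation("dog.jpg")
poisoned_sample = {"image": poisoned_image, "caption": "a photo of a cat"}
poisoned_image.save("dog_poisoned.jpg")
```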

Key takeaways:

  • Nightshade, an open-source tool developed by University of Chicago researchers, can 'poison' digital artwork to mislead AI models that try to learn from it.
  • The tool alters pixels in a way that is invisible to the human eye but can cause AI models to learn incorrect names for objects and scenery in the images.
  • After training on enough 'poisoned' images (as few as 50 in the researchers' dog-to-cat test), AI models began to generate incorrect or distorted images based on the misleading data.
  • The researchers hope that Nightshade will help to restore power to artists and content creators by acting as a deterrent against the misuse of their work by AI companies.