Data poisoning: how artists are sabotaging AI to take revenge on image generators

Dec 18, 2023 - theconversation.com
The article discusses 'data poisoning' in text-to-image generators: the training data pool is 'poisoned' with subtly altered images that appear normal to human eyes but confuse AI models, which can lead to unpredictable and incorrect outputs from the generator. The tool 'Nightshade' has been developed to create such poisoned images in response to unauthorised image scraping and copyright infringement by big tech companies.
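The article does not describe Nightshade's algorithm in detail, but the core idea, an image that looks unchanged to a human yet differs at the pixel level, can be sketched briefly. The Python below is a minimal illustration using random bounded noise; actual poisoning tools use targeted perturbations optimised against a model's feature space, and the file names are hypothetical.

```python
import numpy as np
from PIL import Image

def perturb_image(src: str, dst: str, epsilon: int = 4) -> None:
    """Add small, bounded per-pixel noise so the image looks unchanged
    to a human viewer but no longer matches its original pixel values."""
    img = np.asarray(Image.open(src).convert("RGB"), dtype=np.int16)
    # Uniform noise in [-epsilon, epsilon]; a stand-in for the targeted
    # optimisation a real poisoning tool would perform.
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(dst)

perturb_image("artwork.png", "artwork_poisoned.png")  # hypothetical paths
```

With epsilon around 4 the change is invisible at normal viewing sizes, which is what makes poisoned images hard to filter out by eye.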

The article also explores potential defences against data poisoning, such as paying closer attention to the source and usage of input data, ensemble modeling to detect anomalous samples, and audits. Finally, it considers the wider implications of adversarial approaches like data poisoning for technological governance and the rights of artists and users.
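To make the ensemble-modeling idea concrete, one screening approach, assumed here rather than taken from the article, is to embed every candidate training image and flag statistical outliers with scikit-learn's IsolationForest (itself an ensemble of randomised trees). The random `embeddings` array below stands in for real image embeddings from a feature extractor.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def flag_outliers(features: np.ndarray, contamination: float = 0.01) -> np.ndarray:
    """Return a boolean mask over samples whose feature vectors look
    anomalous relative to the rest of the training pool."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    return detector.fit_predict(features) == -1  # -1 marks outliers

# Placeholder embeddings; in practice these would come from the
# pipeline's own image encoder.
embeddings = np.random.rand(1000, 512)
suspect = flag_outliers(embeddings)
print(f"{suspect.sum()} of {len(suspect)} samples flagged for review")
```

Flagged samples would then go to human review rather than being dropped automatically, since outliers are not necessarily poisoned.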

Key takeaways:

  • Data poisoning is a technique in which an image's pixels are subtly altered to disrupt AI training, producing unpredictable and unintended outputs from text-to-image generators.
  • The tool 'Nightshade' has been developed to empower artists and fight back against unauthorised image scraping by 'poisoning' the images.
  • Proposed solutions to data poisoning include greater attention to data sources, ensemble modeling to detect outliers, and audits using a 'test battery' of hold-out data (see the sketch after this list).
  • Data poisoning is seen by some as an innovative solution to protect the rights of artists and users against the indiscriminate use of online data by tech companies.
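The audit idea in the takeaways can also be sketched: keep a fixed battery of held-out (prompt, reference) pairs out of training, score the model against it after each training run, and fail the audit if quality drops sharply, one possible signature of poisoned data. Everything below, the `generate` and `score` callables and the toy battery, is a hypothetical illustration rather than the article's method.

```python
from typing import Callable, Sequence, Tuple

def audit_model(
    generate: Callable[[str], str],
    score: Callable[[str, str], float],
    test_battery: Sequence[Tuple[str, str]],
    threshold: float = 0.9,
) -> bool:
    """Run the hold-out battery and pass only if the mean score
    stays at or above the threshold."""
    scores = [score(generate(prompt), reference)
              for prompt, reference in test_battery]
    return sum(scores) / len(scores) >= threshold

# Toy stand-ins for a real generator and a real quality metric.
battery = [("a photo of a dog", "dog"), ("a red car", "car")]
passed = audit_model(
    generate=lambda p: p.split()[-1],          # toy "model"
    score=lambda out, ref: float(ref in out),  # toy exact-match score
    test_battery=battery,
)
print("audit passed" if passed else "audit failed")
```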