
Can digital watermarking protect us from generative AI?

Nov 30, 2023 - engadget.com
The Biden administration recently issued an executive order to guide the development of generative artificial intelligence (AI), including content authentication and the use of digital watermarks for digital assets generated by the federal government. The order aims to help content creators securely authenticate their online works amid the rise of AI-generated misinformation. However, critics argue that the order lacks technical detail on how it will achieve its goals and that companies face no legal or regulatory pressure to watermark their AI output.

In response to slow government action, industry alternatives such as Content Credentials (CR) have emerged. CR, developed by a coalition including Microsoft, Adobe, and the BBC, attaches a manifest of additional information to an image whenever it is exported or downloaded. The image can then be checked against the provenance claims made in that manifest, providing an authentication method that stays attached to the content. However, the standard is still in its early stages and is not yet widely adopted. Meanwhile, researchers from the University of Chicago’s SAND Lab have developed Glaze and Nightshade, systems designed to protect artists against generative AIs by disrupting style mimicry or corrupting training data.
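The basic provenance check behind a scheme like Content Credentials can be sketched very simply: hash the asset and compare it to the hash recorded in its manifest. The snippet below is only an illustration of that idea; the real C2PA/Content Credentials format embeds cryptographically signed manifests rather than the plain JSON shown here, and all names in the code are hypothetical.

```python
# Conceptual sketch only: the real Content Credentials (C2PA) format uses signed,
# embedded manifests; this just illustrates checking an asset against a
# provenance claim. All field and function names here are hypothetical.
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_claim(asset_bytes: bytes, manifest_json: str) -> bool:
    """Check that the asset's hash matches the hash recorded in the manifest."""
    manifest = json.loads(manifest_json)
    return sha256_hex(asset_bytes) == manifest.get("asset_sha256")

# A manifest produced when the image was exported by some (hypothetical) editor.
image = b"...image bytes..."
manifest = json.dumps({
    "asset_sha256": sha256_hex(image),
    "claim_generator": "Example Editor 1.0",
    "actions": ["c2pa.created"],
})

print(verify_claim(image, manifest))                 # True: content matches the claim
print(verify_claim(image + b"tampered", manifest))   # False: content was altered
```

In the actual standard the manifest is also digitally signed, so a forged claim fails signature verification; stripping the manifest removes the credential but cannot fabricate a valid one.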

Key takeaways:

  • The Biden White House has issued an executive order to establish a framework for generative artificial intelligence development, including content authentication and the use of digital watermarks to indicate when digital assets made by the federal government are computer generated.
  • Modern digital watermarking embeds additional information into a piece of content using special encoding software, providing a record of where the content originated or who holds the copyright (a minimal illustration follows this list).
  • Content Credentials (CR) is a system that attaches additional information to an image, in the form of a cryptographically secure manifest, whenever the image is exported or downloaded, providing an authentication method that cannot be easily stripped.
  • Teams from the University of Chicago’s SAND Lab have developed Glaze and Nightshade, two copy-protection systems aimed specifically at generative AIs. Glaze disrupts a generative AI’s ability to mimic an artist’s style, while Nightshade subtly changes the pixels of an image to corrupt the training dataset it is ingested into (see the conceptual sketch below).
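
To make the watermarking takeaway concrete, here is a minimal sketch of the general idea behind invisible watermarking, using least-significant-bit embedding of a provenance message into pixel values. Production schemes (including anything the executive order might eventually require) use far more robust techniques such as frequency-domain or spread-spectrum marks, so treat this purely as an illustration; NumPy is assumed to be available.

```python
# Minimal sketch of invisible watermarking via least-significant-bit (LSB)
# embedding: hide a provenance message in the lowest bit of each pixel.
# Illustrative only; real watermarks are designed to survive recompression,
# cropping, and other edits, which this naive scheme would not.
import numpy as np

def embed(pixels: np.ndarray, message: bytes) -> np.ndarray:
    """Write the message's bits into the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("image too small for message")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, length: int) -> bytes:
    """Read back `length` bytes from the least significant bits."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
message = b"origin: generated by model X"
marked = embed(image, message)
print(extract(marked, len(message)))
```

The change is at most one intensity level per pixel, which is why such marks are invisible; it also shows why naive watermarks are fragile and easily destroyed by editing or recompression, part of the reason critics want more technical detail from the order.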
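To give a rough sense of how a poisoning tool in the spirit of Nightshade operates, the sketch below keeps a per-pixel perturbation within a small bound while nudging an image's features toward an unrelated concept, so a model trained on the mislabeled pair learns the wrong association. The actual system optimizes against a text-to-image model's own encoder; the random CNN here is a hypothetical stand-in, PyTorch is assumed, and this is not the authors' implementation.

```python
# Highly simplified sketch of feature-space data poisoning: a small, bounded
# pixel perturbation moves an image's *features* toward those of an unrelated
# target image while the caption stays the same. A random CNN stands in for a
# real text-to-image encoder purely for illustration.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(            # stand-in, NOT the real encoder
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
feature_extractor.eval()

def poison(image, target, epsilon=4 / 255, steps=50, lr=0.01):
    """Perturb `image` (within +/- epsilon per pixel) toward `target`'s features."""
    delta = torch.zeros_like(image, requires_grad=True)
    target_feat = feature_extractor(target).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = nn.functional.mse_loss(feature_extractor(image + delta), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                 # keep the change imperceptible
            delta.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0, 1).detach()

cat = torch.rand(1, 3, 64, 64)   # image that keeps its original caption
dog = torch.rand(1, 3, 64, 64)   # unrelated concept whose features are mimicked
poisoned_cat = poison(cat, dog)
print((poisoned_cat - cat).abs().max())      # tiny per-pixel change
```

Because the perturbation is barely visible, the poisoned image looks normal to people collecting training data, which is what lets it corrupt the dataset it is ingested into.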
