In response to slow government action, industry alternatives such as Content Credentials (CR) have emerged. CR, developed by a coalition including Microsoft, Adobe, and the BBC, attaches a cryptographically signed manifest to an image whenever it is exported or downloaded. The manifest records provenance claims — who created the image and how — that can later be verified against the content itself, providing an authentication method that cannot be easily stripped. However, the standard is still in its early stages and is not yet widely adopted. Meanwhile, researchers at the University of Chicago’s SAND Lab have developed Glaze and Nightshade, systems designed to protect artists’ work from generative AI by disrupting style mimicry or corrupting the models’ training data.
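To make the manifest idea concrete, here is a minimal Python sketch of signing and verifying provenance claims. This is an illustration only, not the actual C2PA/CR format, which embeds a much richer, standardized manifest in the file itself; the `make_manifest` and `verify_manifest` helpers and the self-generated Ed25519 key are hypothetical stand-ins.

```python
# A minimal sketch of the signed-manifest idea; NOT the real C2PA/CR format.
# The helper names and the self-generated key are hypothetical stand-ins.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SIGNING_KEY = Ed25519PrivateKey.generate()  # stands in for a real issuer key

def make_manifest(image_bytes: bytes, creator: str) -> tuple[bytes, bytes]:
    """Hash the image, record a provenance claim, and sign the result."""
    claims = {"creator": creator, "sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(claims, sort_keys=True).encode()
    return payload, SIGNING_KEY.sign(payload)

def verify_manifest(image_bytes: bytes, payload: bytes, signature: bytes) -> bool:
    """Check the signature, then check the image still matches its hash."""
    try:
        SIGNING_KEY.public_key().verify(signature, payload)
    except InvalidSignature:
        return False
    return json.loads(payload)["sha256"] == hashlib.sha256(image_bytes).hexdigest()

image = b"...raw image bytes..."
payload, sig = make_manifest(image, creator="example.org")
print(verify_manifest(image, payload, sig))            # True
print(verify_manifest(image + b"edit", payload, sig))  # False: hash mismatch
```

Tampering with either the image or the manifest breaks verification, which is what lets downstream viewers trust the provenance claims.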
Key takeaways:
- The Biden White House has issued an executive order establishing a framework for generative artificial intelligence development, including content authentication and digital watermarking to indicate when digital assets made by the federal government are computer generated.
- Modern digital watermarking embeds additional information into a piece of content using encoding software, providing a record of where the content originated or who holds the copyright (see the watermarking sketch after this list).
- Content Credentials (CR) attaches a cryptographically signed manifest to an image whenever it is exported or downloaded, providing an authentication method that cannot be easily stripped (sketched above).
- Teams from the University of Chicago’s SAND Lab have developed Glaze and Nightshade, two copy protection systems aimed specifically at generative AIs. Glaze disrupts a model’s ability to mimic an artist’s style, while Nightshade subtly alters the pixels of an image to corrupt any training dataset it is ingested into (a toy sketch follows the watermarking example below).
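As a concrete illustration of the watermarking takeaway above, the Python sketch below uses least-significant-bit (LSB) embedding, one classical and deliberately simple technique; commercial provenance watermarks use far more robust, imperceptible encodings. The `embed` and `extract` helpers are hypothetical.

```python
# A minimal sketch of digital watermarking via least-significant-bit (LSB)
# embedding. All helper names are hypothetical.
import numpy as np

def embed(pixels: np.ndarray, message: bytes) -> np.ndarray:
    """Write the message bits into the lowest bit of each pixel value."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() copies, so the input stays untouched
    if bits.size > flat.size:
        raise ValueError("image too small for this message")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # clear LSB, set bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, length: int) -> bytes:
    """Read `length` bytes back out of the pixel LSBs."""
    return np.packbits(pixels.flatten()[: length * 8] & 1).tobytes()

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(image, b"owner: example.org")
print(extract(marked, 18))  # b'owner: example.org'
```

Changing each pixel by at most one intensity level keeps the mark invisible, but LSB marks are fragile — recompression or resizing destroys them — which is why production systems favor more robust encodings.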
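The Nightshade takeaway also benefits from a concrete picture. The toy Python sketch below shows the general shape of feature-space poisoning under a perceptibility budget; it is emphatically not Nightshade's published method, and the tiny random network is only a stand-in for a real model's encoder. Every name and hyperparameter is hypothetical.

```python
# A toy sketch of the *idea* behind Nightshade-style poisoning: perturb pixels
# within a small budget so a feature extractor sees a different concept.
import torch
import torch.nn as nn

torch.manual_seed(0)
features = nn.Sequential(  # stand-in for a text-to-image model's image encoder
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.Flatten(),
)

image = torch.rand(1, 3, 32, 32)   # the artwork to protect (say, a "dog")
anchor = torch.rand(1, 3, 32, 32)  # an image of the decoy concept (say, a "cat")
eps = 8 / 255                      # max per-pixel change: nearly invisible

delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
target = features(anchor).detach()

for _ in range(200):
    opt.zero_grad()
    # push the perturbed image's features toward the decoy concept's features
    loss = ((features(image + delta) - target) ** 2).mean()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # project back into the perceptibility budget

poisoned = (image + delta).clamp(0, 1).detach()
print(f"max per-pixel change: {delta.abs().max():.4f}")
```

A model trained on many such poisoned images would associate the artwork's visual features with the wrong concept, which is the corruption the takeaway above describes.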