The watermarking initiative is part of a new standard developed by the Coalition for Content Provenance and Authenticity (C2PA), led by Adobe in partnership with companies including Arm, the BBC, Intel, Microsoft, the New York Times, and X/Twitter. Meta also plans to add its own tags to AI-generated images, though it is unclear how it will integrate the C2PA standard. Despite these efforts, users are cautioned not to assume an image is trustworthy simply because it passes Content Credentials Verify, the checking tool developed with C2PA.
Key takeaways:
- OpenAI is adding watermarks to images generated by its AI tools to combat the misuse of deepfake technology. The watermark embeds details about the image's origin in its metadata.
- However, the effectiveness of this approach is questionable, as the metadata watermark can easily be stripped: taking a screenshot discards it entirely, and uploading the image to social media platforms often removes metadata automatically.
- Despite these limitations, OpenAI believes that this method can help increase the trustworthiness of digital information and is a step towards establishing provenance.
- The watermarking standard is being adopted from the Coalition for Content Provenance and Authenticity (C2PA), an initiative led by Adobe in partnership with several companies, including the BBC, Intel, Microsoft, the New York Times, and X/Twitter.
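To illustrate why metadata-based provenance is so fragile, here is a minimal sketch in pure-stdlib Python. It builds a tiny PNG containing a `tEXt` chunk (a stand-in for a provenance tag; real C2PA Content Credentials use a richer manifest format, not a plain text chunk) and then shows that simply copying only the chunks a decoder requires silently drops the tag, much as a screenshot or a re-encode on upload would. All names and the `"generated-by-ai"` tag are illustrative assumptions, not OpenAI's actual format.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble a PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Build a minimal 1x1 grayscale PNG with a provenance-style tEXt chunk.
SIG = b"\x89PNG\r\n\x1a\n"
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
text = chunk(b"tEXt", b"Source\x00generated-by-ai")  # hypothetical tag
idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))    # filter byte + 1 pixel
png = SIG + ihdr + text + idat + chunk(b"IEND", b"")

def strip_metadata(data: bytes) -> bytes:
    """Copy only the critical chunks; ancillary ones like tEXt are dropped."""
    out, pos = data[:8], 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        end = pos + 12 + length  # 4 length + 4 type + data + 4 CRC
        if ctype in (b"IHDR", b"PLTE", b"IDAT", b"IEND"):
            out += data[pos:end]
        pos = end
    return out

assert b"generated-by-ai" in png                   # tag present in original
assert b"generated-by-ai" not in strip_metadata(png)  # gone after re-copy
```

The point of the sketch is that nothing in the image's pixels changes when the tag disappears, which is exactly the limitation the takeaways above describe: any pipeline that rewrites the file without deliberately preserving ancillary data erases the provenance record.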