Meta's President of Global Affairs, Nick Clegg, stated that the company will expand its existing practice of labeling its own AI-generated content to include images created by other services. The labeling technology will rely on invisible watermarks and metadata embedded in image files. However, Meta admits it will not be able to detect AI-generated content that carries no watermarks or metadata, such as images created with some open-source AI image synthesis tools.
Key takeaways:
- Meta plans to start labeling AI-generated images from other companies like OpenAI and Google to enhance transparency on platforms such as Facebook, Instagram, and Threads.
- The initiative is part of a larger effort within the tech industry to establish standards for labeling content created using generative AI models, especially during the contentious US election year.
- Meta's technology for labeling AI-generated content will rely on invisible watermarks and metadata embedded in files, but the company admits it will not be able to detect AI-generated content that carries neither (a rough illustration of this kind of metadata check follows this list).
- Meta is also researching an image watermarking technology called Stable Signature that it hopes can be embedded in open-source image generators, acknowledging that detecting AI content will only get harder as open-source AI tools produce increasingly sophisticated and realistic images.
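The article does not spell out how metadata-based detection works, but a minimal, hypothetical sketch helps show both the approach and its limitation: provenance standards such as IPTC's Digital Source Type vocabulary and C2PA (Content Credentials) embed machine-readable markers in the image file itself, so a crude check can scan a file's bytes for those markers. The marker strings, the function name `has_ai_provenance_metadata`, and the byte-scanning shortcut below are illustrative assumptions, not Meta's actual pipeline; real systems parse the metadata structures properly.

```python
# Hypothetical sketch (not Meta's implementation): naively scan an image
# file's raw bytes for provenance markers that some AI image tools embed.
# Real detectors parse XMP/IPTC metadata and C2PA manifests properly.
import sys
from pathlib import Path

# Assumed marker strings:
# - "trainedAlgorithmicMedia": IPTC Digital Source Type term for AI-generated media
# - "c2pa": label used by C2PA (Content Credentials) manifest stores
AI_PROVENANCE_MARKERS = [b"trainedAlgorithmicMedia", b"c2pa"]


def has_ai_provenance_metadata(path: str) -> bool:
    """Return True if any known AI-provenance marker appears in the file's bytes."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)


if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        found = has_ai_provenance_metadata(image_path)
        print(f"{image_path}: {'provenance markers found' if found else 'no markers found'}")
```

The sketch also makes Meta's caveat concrete: an image produced by a tool that embeds no such metadata simply contains nothing to find, which is the detection gap the company acknowledges.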