In addition to images, Meta is also changing its policy to require users who post AI-generated videos or audio to inform the company that the content is synthetic. If users fail to disclose this, they could face penalties under Meta's existing Community Standards. The company is also exploring the use of large language models (LLMs) to help enforce its policies and remove harmful content. However, the effectiveness of these measures remains unclear without data on the prevalence of synthetic content and the success rate of Meta's detection systems.
Key takeaways:
- Meta is expanding the labelling of AI-generated imagery on its platforms, including those created using rivals’ generative AI tools, provided they use “industry standard indicators” that the content is AI-generated.
- Meta's approach to labelling AI-generated imagery relies on detecting both visible marks and the “invisible watermarks” and metadata that its generative AI tech embeds in synthetic images (a rough illustration of metadata-based detection follows this list).
- Meta is changing its policy to require users who post “photorealistic” AI-generated video or “realistic-sounding” audio to inform it that the content is synthetic, with penalties for those who fail to disclose.
- Meta is also exploring the use of large language models (LLMs) to support its enforcement efforts during moments of “heightened risk”, such as elections, and to aid in content moderation.
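To make the metadata side of this concrete, here is a minimal, illustrative sketch of the kind of check a platform could run. It assumes that one of the “industry standard indicators” is the IPTC digital source type value for generative-AI content (`trainedAlgorithmicMedia`), and it uses a naive byte scan of the file in place of a proper XMP/C2PA parser; it is not Meta's actual detection pipeline, and it would not catch invisible watermarks, which require separate decoders.

```python
"""Illustrative only: flag an image whose embedded metadata carries a
generative-AI digital source type. The marker string and the byte-scan
approach are simplifying assumptions, not Meta's implementation."""

from pathlib import Path

# IPTC's controlled-vocabulary term for content produced by a generative model.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"


def has_ai_metadata_flag(image_path: str) -> bool:
    """Return True if the file's raw bytes contain the generative-AI
    source-type marker (a stand-in for parsing XMP/IPTC metadata properly)."""
    data = Path(image_path).read_bytes()
    return AI_SOURCE_TYPE in data


if __name__ == "__main__":
    # Hypothetical file names, purely for demonstration.
    for name in ["photo.jpg", "generated.png"]:
        try:
            flagged = has_ai_metadata_flag(name)
            print(f"{name}: {'AI metadata flag found' if flagged else 'no AI metadata flag'}")
        except FileNotFoundError:
            print(f"{name}: file not found")
```

The obvious limitation, and the reason Meta pairs metadata with invisible watermarks, is that metadata of this kind can be stripped simply by re-saving or screenshotting an image.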