The policy change could lead to more AI-generated content and manipulated media remaining on Meta’s platforms, as the company shifts to an approach focused on “providing transparency and additional context”. Meta will stop removing content solely on the basis of its current manipulated video policy in July. The change is likely a response to growing regulatory pressure on Meta around content moderation and systemic risk, such as that imposed by the European Union’s Digital Services Act. The company is also likely weighing the upcoming US presidential election and the potential for misleading election-related content.
Key takeaways:
- Meta is changing its rules on AI-generated content and manipulated media, planning to label a wider range of such content, including deepfakes, with a “Made with AI” badge.
- The policy change could lead to more AI-generated content and manipulated media remaining on Meta’s platforms, as the company shifts its focus to providing transparency and additional context.
- Meta will stop removing content solely based on its current manipulated video policy in July, but will add informational labels and context in certain scenarios of high public interest.
- The changes follow criticism from Meta’s Oversight Board, which argued that the company’s existing approach was too narrow and risked restricting freedom of expression.