Violating the updated policy will result in a strike that removes the offending content from the channel and temporarily limits the user's activities on the platform. A first strike prevents the user from uploading videos for one week, and penalties escalate for repeat violations within a 90-day window, up to removal of the entire channel. The change comes as platforms such as YouTube and TikTok have rolled out AI-driven creation tools alongside new policies intended to manage synthetic content that could mislead viewers.
Key takeaways:
- YouTube is updating its cyberbullying and harassment policies, banning content that realistically simulates crime victims, including minors, describing the violence or death they experienced.
- The policy change targets a genre of true crime content that uses AI to create disturbing depictions of victims, including children, describing the violence they suffered.
- Violations result in a strike that removes the content and temporarily restricts the user's activity on the platform, with escalating penalties for repeat violations within 90 days, up to removal of the entire channel.
- Other platforms, including TikTok, have also introduced policies requiring creators to label AI-generated synthetic content, and YouTube has separately announced a strict policy covering AI voice clones of musicians.