Despite being a major platform for children's entertainment, YouTube has long struggled to moderate content aimed at kids and has been criticized for hosting inappropriate material. Because the new policy exempts animation, parents cannot easily filter out AI-generated videos. Some AI-generated content targeting kids does still require flagging under the new rules, particularly videos that push pseudoscience and conspiracy theories. However, many parents still rely on the main YouTube app to find content for their children, which makes low-quality, AI-generated content difficult to avoid.
Key takeaways:
- YouTube has updated its rules to require disclosure when uploaded videos use certain kinds of synthetic media, including generative AI, so that viewers know when what they're seeing isn't real.
- The new policy does not apply to AI-generated animations aimed at children, meaning creators can continue to produce such content without disclosing their methods.
- Creators also don't need to flag use of AI for minor edits that are primarily aesthetic, or for generating or improving scripts or captions.
- Despite the new rules, YouTube continues to face criticism over its struggle to moderate the vast amount of content aimed at children, and concerns remain about the quality and suitability of AI-generated videos.