The company also considers whether the AI-generated content features a public figure or shows them engaging in sensitive behavior. YouTube gives the content's uploader 48 hours to act on the complaint; if the content is not removed within that window, the company initiates its own review and may take down the video along with any personal information associated with it. YouTube has also introduced a tool that lets creators disclose when content was made with altered or synthetic media, and it is testing a feature that allows users to add context to videos. Despite these changes, YouTube has warned that labeling AI content won't necessarily protect it from removal if it violates the platform's Community Guidelines.
Key takeaways:
- YouTube has implemented a policy change that allows individuals to request the removal of AI-generated or synthetic content that simulates their face or voice, under its privacy request process.
- The company will make its own judgment on each complaint based on factors such as whether the content is disclosed as synthetic or AI-generated, whether it uniquely identifies a person, and whether it could be considered parody, satire, or otherwise in the public interest.
- YouTube will give the content's uploader 48 hours to act on the complaint. If the content is not removed within this period, the company will initiate its own review.
- YouTube has clarified that receiving a privacy complaint will not automatically result in a Community Guidelines strike for the content creator; the two processes are handled separately.