The move comes after criticism of the impact of such content on minors and of the use of AI technology by cybercriminals for malicious purposes. The changes follow new policies and tools introduced two months ago, aimed at responsible disclosure of AI content and the removal of deepfakes. Users will now be required to disclose when they have produced or altered content to make it appear realistic; failure to do so could result in content removal or suspension from the YouTube Partner Program.
Key takeaways:
- YouTube is intensifying its crackdown on AI-generated content that simulates harmful events involving minors, with stricter policies and penalties.
- Google, YouTube's parent company, has faced criticism both over the harmful effects of AI-generated content on minors and over the misuse of AI technology by cybercriminals.
- The new policies take effect in the middle of this month; users who violate them may have their content removed or their channels deleted.
- The changes come two months after YouTube began rolling out policies on responsible disclosure of AI content, including tools to remove deepfakes.