The draft guidelines suggest that platforms clearly label AI-generated content and give users tools to label such content themselves. They also recommend watermarking as a way to make AI-generated content identifiable, and encourage platforms to adapt their content moderation systems to detect watermarks and other content provenance indicators. The guidelines, which are under public consultation until March 7, further suggest that platforms ensure information produced with generative AI draws on reliable sources, and that they warn users about potential errors in AI-generated content.
Key takeaways:
- The European Union has opened a consultation on draft election security guidelines aimed at large online platforms such as Facebook, Google, TikTok, and Twitter. The guidelines seek to mitigate democratic risks from generative AI and deepfakes, among other issues.
- The guidelines target the nearly two dozen platform giants and search engines currently designated under the EU's Digital Services Act (DSA), whose strictest obligations apply to services with more than 45 million monthly active users in the region.
- The draft guidelines recommend that tech giants put in place "reasonable, proportionate, and effective" mitigation measures against the creation and large-scale dissemination of AI-generated fakes. They also suggest platforms should provide users with tools to label AI-generated content.
- The draft guidelines are under public consultation in the EU until March 7. The final guidelines are expected to be available before the European Parliament elections in early June.