OpenAI’s flawed plan to flag deepfakes ahead of 2024 elections

May 08, 2024 - arstechnica.com
OpenAI has developed an AI image detection classifier that identifies about 98% of outputs from its own image generator, DALL-E 3, but flags only 5-10% of images generated by other AI models. The tool returns a binary prediction of whether an image is likely AI-generated and can also display a content summary confirming that the content was made with an AI tool. To support this effort, OpenAI adds tamper-resistant metadata to all images created and edited by DALL-E 3, which the detector reads to flag the images as AI-generated.

However, the solution is not comprehensive: the metadata can be stripped, and deceptive content can still be created without it. Even so, OpenAI considers the metadata an important resource for building trust and has committed to the Coalition for Content Provenance and Authenticity (C2PA) standard for certifying digital content. OpenAI has joined the C2PA steering committee and will launch a $2 million fund with Microsoft to support broader AI education and understanding.
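
To make the provenance approach concrete, the sketch below checks whether a JPEG carries any embedded C2PA metadata at all. It is a rough heuristic, not OpenAI's detector: it only scans the file's APP11 (JUMBF) segments for a "c2pa" label, the function name and command-line usage are placeholders, and a real verifier (for example the official C2PA tooling) would parse the full manifest and validate its cryptographic signatures rather than merely detecting its presence.

    import struct
    import sys

    def has_c2pa_manifest(path: str) -> bool:
        """Heuristic check: does this JPEG carry any C2PA metadata?

        C2PA manifests are embedded as JUMBF boxes carried in JPEG APP11
        segments; this sketch only looks for that label and does NOT
        verify the manifest's signatures.
        """
        with open(path, "rb") as f:
            data = f.read()
        if data[:2] != b"\xff\xd8":               # not a JPEG (no SOI marker)
            return False
        offset = 2
        while offset + 4 <= len(data):
            if data[offset] != 0xFF:              # lost sync with the marker stream
                break
            marker = data[offset + 1]
            if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
                offset += 2                       # standalone markers carry no length
                continue
            if marker == 0xDA:                    # start of scan: image data follows
                break
            (seg_len,) = struct.unpack(">H", data[offset + 2:offset + 4])
            segment = data[offset + 4:offset + 2 + seg_len]
            if marker == 0xEB and b"c2pa" in segment:   # APP11 / JUMBF segment
                return True
            offset += 2 + seg_len
        return False

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            status = "C2PA metadata found" if has_c2pa_manifest(path) else "no C2PA metadata"
            print(f"{path}: {status}")

Because a check like this only detects the presence of metadata, stripping it (for example by re-encoding or screenshotting the image) defeats it, which is exactly the limitation noted above.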

Key takeaways:

  • OpenAI has developed an AI image detection classifier that identifies about 98 percent of outputs from its own image generator, DALL-E 3, but flags only 5 to 10 percent of images generated by other AI models.
  • The tool uses tamper-resistant metadata to accurately flag AI-generated images, following a standard set by the Coalition for Content Provenance and Authenticity (C2PA).
  • OpenAI has joined the C2PA steering committee to help drive broader adoption of the standard and will also launch a $2 million fund with Microsoft to support broader AI education and understanding.
  • Despite these efforts, the solution is not comprehensive: the metadata can always be removed, and people can still create deceptive content without it.