
Researchers Tested AI Watermarks—and Broke All of Them

Oct 03, 2023 - wired.com
Current techniques for watermarking AI-generated images are unreliable, according to a new study by Soheil Feizi, a computer science professor at the University of Maryland. The study found that bad actors can easily evade watermarking attempts, and can even add watermarks to human-generated images, triggering false positives. Despite this, tech giants continue to develop watermarking technology to combat misinformation, with Google's DeepMind releasing a beta version of its new watermarking tool, SynthID.

However, other researchers and industry professionals are skeptical about the effectiveness of watermarking. Hany Farid, a professor at the UC Berkeley School of Information, and Bars Juhasz, the cofounder of Undetectable, a startup devoted to helping people evade AI detectors, both argue that watermarking alone is not sufficient. They suggest that improving upon watermarking and using it in combination with other technologies could make it harder for bad actors to create convincing fakes.

Key takeaways:

  • Current watermarking techniques for AI images are not reliable, according to a new study by Soheil Feizi and his coauthors at the University of Maryland.
  • The study shows that it's easy for bad actors to remove watermarks and even add them to human-generated images, triggering false positives.
  • Despite its flaws, some experts believe that watermarking can still play a role in AI detection if used in combination with other technologies.
  • Feizi suggests that it might be necessary to accept that we won't be able to reliably flag AI-generated images, though his paper concludes that designing a robust watermark could be a challenging but not necessarily impossible task.
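To make the fragility concrete, here is a toy sketch of one of the simplest watermarking ideas, least-significant-bit (LSB) embedding, and how a trivial perturbation destroys it. This is purely illustrative and is not the scheme used by SynthID or studied in the paper; the functions and pixel values below are assumptions made up for the example.

```python
import random

def embed_watermark(pixels, bits):
    # Toy scheme: store each watermark bit in a pixel's least significant bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n):
    # Read the watermark back out of the first n pixels' LSBs.
    return [p & 1 for p in pixels[:n]]

random.seed(0)
pixels = [random.randint(0, 255) for _ in range(64)]  # fake grayscale image
bits = [random.randint(0, 1) for _ in range(64)]      # fake watermark payload

marked = embed_watermark(pixels, bits)
assert extract_watermark(marked, 64) == bits  # survives a clean copy

# A trivial "attack": nudge each pixel by +/-1, mimicking noise or
# re-compression. This flips almost every LSB and erases the watermark.
attacked = [min(255, max(0, p + random.choice([-1, 1]))) for p in marked]
recovered = extract_watermark(attacked, 64)
matches = sum(a == b for a, b in zip(recovered, bits))
print(f"watermark bits surviving the attack: {matches}/64")
```

Real watermarks are embedded far more robustly than this, but the study's point is that the same cat-and-mouse dynamic applies: attacks that imperceptibly perturb the image can still strip the signal.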
