However, other researchers and industry professionals are skeptical that watermarking will work on its own. Hany Farid, a professor at the UC Berkeley School of Information, and Bars Juhasz, cofounder of Undetectable, a startup devoted to helping people evade AI detectors, both argue that watermarking by itself is not sufficient. They suggest that strengthening watermarking techniques and combining them with other technologies could make it harder for bad actors to create convincing fakes.
Key takeaways:
- Current watermarking techniques for AI-generated images are unreliable, according to a new study by Soheil Feizi and his coauthors at the University of Maryland.
- The study shows that it's easy for bad actors to remove watermarks and even add them to human-generated images, triggering false positives.
- Despite its flaws, some experts believe that watermarking can still play a role in AI detection if used in combination with other technologies.
- Feizi suggests we may have to accept that we won't be able to reliably flag AI-generated images; even so, his paper concludes that designing a robust watermark is a challenging, but not necessarily impossible, task.