Many AI researchers think fakes will become undetectable

Jan 20, 2024 - economist.com
The article discusses the challenges in detecting and watermarking AI-generated media. It highlights how AI has been used to create fake advertisements and explicit content, leading to concerns about the ability to distinguish real from fake. Despite efforts from tech giants and startups to develop detection software and watermarking techniques, these have not proven entirely reliable. The article cites studies showing that detection software often produces false positives and negatives, while watermarking can be defeated by image manipulation or additional noise.

The article also notes that the AI community is pessimistic about the future of detection and watermarking, with a majority believing that AI-generated media will eventually become undetectable. Despite this, the US government has secured voluntary commitments from AI firms to boost investment in watermarking research. The article concludes that while current safeguards are imperfect, they are better than none, but warns that the contest between the creators of fake content and those trying to detect it appears to favor the former.

Key takeaways:

  • AI-generated media is becoming increasingly difficult to detect, with many experts believing it will eventually become undetectable.
  • Detection software that aims to identify AI-generated media often produces false positives and negatives, and struggles to consistently identify machine-generated content.
  • Watermarking techniques are being developed to label AI-generated content in advance, but these methods are also not foolproof and can be defeated.
  • Despite the challenges, AI firms are boosting investment in watermarking research, as having imperfect safeguards is better than having none.