Solutions to this problem are not immediately clear. Adobe builds safeguards into its own AI products and promotes an industry standard for provenance metadata through its Content Authenticity Initiative (CAI), but these measures are voluntary and can be bypassed. Watermarking is another potential solution, with companies such as DeepMind, Imatag, and Steg.AI developing technologies to mark AI-generated images. Ultimately, the panelists stressed, generative AI companies themselves must ensure their content is trustworthy in order to prevent misuse and the spread of misinformation.
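The panel did not get into the mechanics, but the core idea behind invisible watermarking can be sketched in a few lines. The toy example below hides an "AI-GEN" tag in an image's least-significant bits; production systems like DeepMind's SynthID or Imatag's are far more robust and tamper-resistant, so treat this purely as an illustration of the concept. The payload string and function names are invented for the example.

```python
# A minimal sketch of invisible image watermarking using a naive
# least-significant-bit (LSB) scheme. Illustrative only: real
# AI-content watermarks survive compression, cropping, and editing,
# which this toy version does not.
from PIL import Image

MARK = "AI-GEN"  # hypothetical payload flagging the image as AI-generated

def embed_watermark(src_path: str, dst_path: str, payload: str = MARK) -> None:
    """Hide the payload in the red-channel LSBs, one bit per pixel."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    # Flatten the payload into a list of bits, most significant bit first.
    bits = [int(b) for ch in payload.encode() for b in f"{ch:08b}"]
    w, h = img.size
    assert len(bits) <= w * h, "image too small for payload"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | bit, g, b)  # overwrite the red LSB
    img.save(dst_path, "PNG")  # lossless format preserves the hidden bits

def extract_watermark(path: str, length: int = len(MARK)) -> str:
    """Read back `length` bytes from the red-channel LSBs."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    w, _ = img.size
    bits = [pixels[i % w, i // w][0] & 1 for i in range(length * 8)]
    data = bytes(
        int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)
    )
    return data.decode(errors="replace")
```

The fragility of schemes like this one is exactly why the panelists treated watermarking as only a partial answer: re-encoding or resizing an image can destroy a naive mark, and nothing compels bad actors to apply one in the first place.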
Key takeaways:
- Generative AI tools are making it easier and cheaper to create and distribute disinformation on a large scale, posing a significant threat to democracy and shared truth.
- NewsGuard has identified hundreds of unreliable AI-generated news sites, a sign that disinformation is becoming a volume game.
- The CAI and other organizations are developing provenance metadata, watermarking, and other safeguards against misuse of generative AI, but their effectiveness is uncertain because adoption is voluntary.
- Despite the challenges, there is optimism that economic incentives will encourage generative AI companies to ensure their content is reliable and prevent misuse of their tools.