Tech companies and AI creators are trying to deter malicious use and promote more ethical tooling. For instance, Hugging Face is developing image "guarding," and Civitai has policies against depicting real people in mature contexts. However, these measures are not foolproof. The article suggests that community norms, legislation, and collaboration among stakeholders could help deter abuse. It closes by highlighting the severe consequences of image-based abuse, particularly for women, and the need for a safer online environment.
Key takeaways:
- AI-generated images are being used for harmful purposes, including nonconsensual explicit imagery and deepfake pornography, which overwhelmingly target women.
- Open source image generation software is difficult to control, and despite efforts from some community members, it's nearly impossible to prevent misuse.
- Some AI creators are attempting to discourage malicious use by building ethical tools, including image "guarding," and by letting developers control access to the models they upload to hosting platforms.
- There are calls for more dialogue and collaboration among AI startups, open source developers, governments, women's organizations, academics, and civil society to explore deterrents to nonconsensual AI porn that don't hinder access to open source models.