Researcher Introduces Filter for 'Unsafe' AI-generated Images

Nov 14, 2023 - techtimes.com
AI image generators are being exploited to create explicit or disturbing images, according to a study by Yiting Qu from the Center for IT-Security, Privacy, and Accountability (CISPA). The research found that 14.56% of images generated by four popular AI image generators fell into the "unsafe images" category, which includes sexually explicit, violent, disturbing, hateful, and political content. Qu proposed a filter that computes the distance between each generated image and a set of defined unsafe words and replaces any image that crosses a specified threshold with a black color field.
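
The article does not include the filter's implementation, but the idea it describes, measuring how close a generated image sits to a list of unsafe words in a shared image-text embedding space and replacing anything that crosses a threshold with a black field, can be sketched with a CLIP-style model. The checkpoint, the unsafe-word list, the use of cosine similarity as the closeness measure, and the threshold value below are illustrative assumptions, not details taken from the study.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Illustrative choices, not the study's actual configuration.
    MODEL_NAME = "openai/clip-vit-base-patch32"
    UNSAFE_WORDS = ["violence", "gore", "nudity", "hate symbol"]
    SIMILARITY_THRESHOLD = 0.25  # hypothetical cutoff on cosine similarity

    model = CLIPModel.from_pretrained(MODEL_NAME)
    processor = CLIPProcessor.from_pretrained(MODEL_NAME)

    def filter_image(image: Image.Image) -> Image.Image:
        """Return the image unchanged, or a black field if it sits too close to any unsafe word."""
        inputs = processor(text=UNSAFE_WORDS, images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            outputs = model(**inputs)
        # Normalize the embeddings so their dot product is a cosine similarity.
        image_emb = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
        text_emb = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
        similarity = (image_emb @ text_emb.T).squeeze(0)  # one score per unsafe word
        if similarity.max().item() > SIMILARITY_THRESHOLD:
            # Replace the flagged image with a black color field, as the article describes.
            return Image.new("RGB", image.size, "black")
        return image

In practice such a threshold would need tuning against labeled examples, since setting it too low blocks benign images while setting it too high lets unsafe ones through.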

Qu also suggested three remedies to mitigate the generation of harmful images: curating training data more effectively, implementing regulations on user-input prompts, and establishing mechanisms to classify and delete unsafe images already circulating online. While acknowledging the need to balance content freedom and security, Qu emphasized the importance of strict regulations to prevent harmful images from circulating widely on mainstream platforms.

Key takeaways:

  • AI image generators are being exploited to create explicit or disturbing images, with 14.56% of images generated by four popular AI image generators falling into the "unsafe images" category.
  • Yiting Qu, a researcher at CISPA, has proposed a filter that calculates the distance between generated images and defined unsafe words, replacing violating images with a black color field.
  • Qu suggests three remedies to mitigate the generation of harmful images: curating training data more effectively, implementing regulations on user-input prompts, and establishing mechanisms to classify and delete unsafe images online.
  • While acknowledging the tension between content freedom and security, Qu stresses the need for stringent regulations to prevent harmful images from gaining widespread circulation on mainstream platforms.