Microsoft's Copilot now blocks some prompts that generated violent and sexual images

Mar 08, 2024 - engadget.com
Microsoft has reportedly blocked several prompts in its AI tool, Copilot, that were generating violent, sexual, and other inappropriate images. The move comes after a company engineer raised serious concerns about the tool's output with the Federal Trade Commission. Now, when users enter certain terms like “pro choice,” “four twenty,” or “pro life,” they receive a message stating that these prompts are blocked and repeated violations could lead to suspension. However, it is still possible to generate violent imagery with prompts like “car accident.”

Microsoft engineer Shane Jones has been warning about the inappropriate images generated by Microsoft's OpenAI-powered systems for months. He found that even seemingly harmless prompts, such as “pro-choice,” could produce disturbing images like demons eating infants or Darth Vader threatening a baby. Jones has written to the FTC and Microsoft's board of directors about his concerns. In response, Microsoft stated that it is continuously monitoring and adjusting the system to strengthen safety filters and prevent misuse.

Key takeaways:

  • Microsoft has blocked several prompts in its Copilot tool that led the AI to generate violent, sexual, and other inappropriate images.
  • The changes were made after an engineer at the company raised serious concerns about Microsoft's Generative AI technology with the Federal Trade Commission.
  • Despite the changes, CNBC found that it was still possible to generate violent imagery through certain prompts, and users can still convince the AI to create images of copyrighted works.
  • Microsoft has stated that it is continuously monitoring and making adjustments to strengthen safety filters and prevent misuse of the system.